0:16 >> NARRATOR: Tonight--
0:17 >> The race to become an A.I. superpower is on...
0:20 >> NARRATOR: The politics of artificial intelligence...
0:22 >> There will be a Chinese tech sector
0:24 and there will be an American tech sector.
0:26 >> NARRATOR: The new tech war.
0:27 >> The more data, the better the A.I. works.
0:30 So in the age of A.I., where data is the new oil,
0:33 China is the new Saudi Arabia.
0:36 >> NARRATOR: The future of work...
0:37 >> When I increase productivity through automation,
0:40 jobs go away.
0:42 >> I believe about 50% of jobs will be somewhat
0:46 or extremely threatened by A.I. in the next 15 years or so.
0:50 >> NARRATOR: A.I. and corporate surveillance...
0:52 >> We thought that we were searching Google.
0:55 We had no idea that Google was searching us.
0:57 >> NARRATOR: And the threat to democracy.
1:00 >> China is on its way to building
1:02 a total surveillance state.
1:04 >> NARRATOR: Tonight on "Frontline"...
1:06 >> It has pervaded so many elements of everyday life.
1:09 How do we make it transparent and accountable?
1:11 >> NARRATOR: ..."In the Age of A.I."
1:16 ♪ ♪
1:42 >> NARRATOR: This is the world's most complex board game.
1:48 There are more possible moves in the game of Go
1:51 than there are atoms in the universe.
1:55 Legend has it that in 2300 BCE, Emperor Yao devised it
2:01 to teach his son discipline, concentration, and balance.
2:08 And, over 4,000 years later, this ancient Chinese game
2:12 would signal the start of a new industrial age.
2:17 ♪ ♪
2:25 It was 2016, in Seoul, South Korea.
2:31 >> Can machines overtake human intelligence?
2:35 A breakthrough moment when the world champion
2:37 of the Asian board game Go takes on an A.I. program
2:40 developed by Google.
2:42 >> (speaking Korean):
2:55 >> In countries where it's very popular,
2:56 like China and Japan and, and South Korea, to them,
3:00 Go is not just a game, right?
3:02 It's, like, how you learn strategy.
3:04 It has an almost spiritual component.
3:07 You know, if you talk to South Koreans, right,
3:09 and Lee Sedol is the world's greatest Go player,
3:11 he's a national hero in South Korea.
3:13 They were sure that Lee Sedol would beat AlphaGo hands down.
3:18 ♪ ♪
3:23 >> NARRATOR: Google's AlphaGo was a computer program that,
3:26 starting with the rules of Go
3:28 and a database of historical games,
3:31 had been designed to teach itself.
3:34 >> I was one of the commentators at the Lee Sedol games.
3:38 And yes, it was watched by tens of millions of people.
3:42 (man speaking Korean)
3:44 >> NARRATOR: Throughout Southeast Asia,
3:46 this was seen as a sports spectacle
3:48 with national pride at stake.
3:49 >> Wow, that was a player guess.
3:51 >> NARRATOR: But much more was in play.
3:53 This was the public unveiling
3:55 of a form of artificial intelligence
3:57 called deep learning,
4:00 that mimics the neural networks of the human brain.
4:03 >> So what happens with machine learning,
4:05 or artificial intelligence-- initially with AlphaGo--
4:08 is that the machine is fed all kinds of Go games,
4:12 and then it studies them, learns from them,
4:15 and figures out its own moves.
4:17 And because it's an A.I. system--
4:19 it's not just following instructions,
4:21 it's figuring out its own instructions--
4:23 it comes up with moves that humans hadn't thought of before.
4:26 So, it studies games that humans have played, it knows the rules,
4:31 and then it comes up with creative moves.
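The learning loop described here, a system that figures out its own instructions from examples, can be sketched in miniature. This is a hypothetical toy (a tiny neural network teaching itself XOR from four example input/output pairs), not AlphaGo's actual code or architecture:

```python
import numpy as np

# Toy illustration of learning from examples: the network is never given
# rules for XOR, only four input/output pairs, and it adjusts its own
# weights by backpropagation until it fits them.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(epochs=20000, lr=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)              # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)   # backward pass
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return out  # the network's own predictions for the four inputs
```

Nothing in the code spells out what XOR is; the mapping emerges entirely from the examples, which is the point of the passage above.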
4:36 (woman speaking Korean)
4:39 (speaking Korean):
4:42 >> That's a very... that's a very surprising move.
4:44 >> I thought it was a mistake.
4:47 >> NARRATOR: Game two, move 37.
4:51 >> That move 37 was a move that humans could not fathom,
4:54 but yet it ended up being brilliant
4:57 and woke people up to say,
5:00 "Wow, after thousands of years of playing,
5:03 we never thought about making a move like that."
5:06 >> Oh, he resigned.
5:08 It looks like... Lee Sedol has just resigned, actually.
5:12 >> Yeah! >> Yes.
5:13 >> NARRATOR: In the end, the scientists watched
5:15 their algorithms win four of the games.
5:18 Lee Sedol took one.
5:20 >> What happened with Go, first and foremost,
5:22 was a huge victory for DeepMind and for A.I., right?
5:25 It wasn't that the computers beat the humans,
5:28 it was that, you know, one type of intelligence beat another.
5:31 >> NARRATOR: Artificial intelligence had proven
5:34 it could marshal a vast amount of data,
5:36 beyond anything any human could handle,
5:40 and use it to teach itself how to predict an outcome.
5:44 The commercial implications were enormous.
5:48 >> While AlphaGo is a toy game,
5:51 its success and its waking everyone up, I think,
5:57 is, is going to be remembered as the pivotal moment
6:03 where A.I. became mature
6:07 and everybody jumped on the bandwagon.
6:09 ♪ ♪
6:10 >> NARRATOR: This is about the consequences of that defeat.
6:14 (man speaking local language)
6:16 How the A.I. algorithms are ushering in a new age
6:19 of great potential and prosperity,
6:24 but an age that will also deepen inequality, challenge democracy,
6:29 and divide the world into two A.I. superpowers.
6:35 Tonight, five stories about how artificial intelligence
6:39 is changing our world.
6:40 ♪ ♪
6:51 China has decided to chase the A.I. future.
6:56 >> The difference between the internet mindset
6:58 and the A.I. mindset...
7:00 >> NARRATOR: A future made and embraced by a new generation.
7:07 >> Well, it's hard not to feel the kind of immense energy,
7:10 and also the obvious fact of the demographics.
7:15 They're mostly very young people,
7:18 so that this clearly is technology which is being
7:22 generated by a whole new generation.
7:26 >> NARRATOR: Orville Schell is one of
7:27 America's foremost China scholars.
7:30 >> (speaking Mandarin)
7:31 >> NARRATOR: He first came here 45 years ago.
7:34 >> When I, when I first came here, in 1975,
7:38 Chairman Mao was still alive,
7:40 the Cultural Revolution was coming on,
7:43 and there wasn't a single whiff of anything
7:47 of what you see here.
7:49 It was unimaginable.
7:50 In fact, in those years, one very much thought,
7:54 "This is the way China is, this is the way it's going to be."
8:00 And the fact that it has gone through
8:02 so many different changes since is quite extraordinary.
8:06 (man giving instructions)
8:08 >> NARRATOR: This extraordinary progress goes back
8:11 to that game of Go.
8:14 >> I think that the government recognized
8:16 that this was a sort of critical thing for the future,
8:18 and, "We need to catch up in this," that, you know,
8:20 "We cannot have a foreign company showing us up
8:22 at our own game.
8:24 And this is going to be something that is going to be
8:25 critically important in the future."
8:27 So, you know, we called it the Sputnik moment for,
8:29 for the Chinese government--
8:31 the Chinese government kind of woke up.
8:33 >> (translated): As we often say in China,
8:36 "The beginning is the most difficult part."
8:39 >> NARRATOR: In 2017, Xi Jinping announced
8:42 the government's bold new plans
8:44 to an audience of foreign diplomats.
8:47 China would catch up with the U.S. in artificial intelligence
8:51 by 2025 and lead the world by 2030.
8:55 >> (translated): ...and intensified cooperation
8:57 in frontier areas such as digital economy,
9:00 artificial intelligence, nanotechnology,
9:02 and quantum computing.
9:05 ♪ ♪
9:11 >> NARRATOR: Today, China leads the world in e-commerce.
9:18 Drones deliver to rural villages.
9:22 And a society that bypassed credit cards
9:25 now shops in stores without cashiers,
9:28 where the currency is facial recognition.
9:33 >> No country has ever moved that fast.
9:36 And in a short two-and-a-half years,
9:38 China's A.I. implementation really went from a minimal amount
9:43 to probably about 17 or 18 unicorns,
9:47 that is, billion-dollar companies, in A.I. today.
9:50 And that, that progress is, is hard to believe.
9:55 >> NARRATOR: The progress was powered by a new generation
9:57 of ambitious young techs pouring out of Chinese universities,
10:01 competing with each other for new ideas,
10:05 and financed by a new cadre of Chinese venture capitalists.
10:11 This is Sinovation,
10:13 created by U.S.-educated A.I. scientist and businessman
10:17 Kai-Fu Lee.
10:19 >> These unicorns-- we've got one, two, three, four, five,
10:24 six, in the general A.I. area.
10:27 And unicorn means a billion-dollar company,
10:29 a company whose valuation or market capitalization
10:33 is at $1 billion or higher.
10:36 I think we put two unicorns to show $5 billion or higher.
10:42 >> NARRATOR: Kai-Fu Lee was born in Taiwan.
10:45 His parents sent him to high school in Tennessee.
10:48 His PhD thesis at Carnegie Mellon
10:51 was on computer speech recognition,
10:53 which took him to Apple.
10:55 >> Well, reality is a step closer to science fiction,
10:57 with Apple Computer's newly developed program...
11:00 >> NARRATOR: And at 31, an early measure of fame.
11:03 >> Kai-Fu Lee, the inventor of Apple's
11:06 speech-recognition technology.
11:07 >> Casper, copy this to MacWrite II.
11:10 Casper, paste.
11:12 Casper, 72-point italic outline.
11:15 >> NARRATOR: He would move on to Microsoft Research in Asia
11:18 and became the head of Google China.
11:21 Ten years ago, he started Sinovation in Beijing,
11:26 and began looking for promising startups and A.I. talent.
11:30 >> So, the Chinese entrepreneurial companies
11:33 started as copycats.
11:35 But over the last 15 years, China has developed its own form
11:39 of entrepreneurship, and that entrepreneurship is described
11:45 as tenacious, very fast, winner-take-all,
11:50 and incredible work ethic.
11:52 I would say these few thousand Chinese top entrepreneurs,
11:57 they could take on any entrepreneur
11:59 anywhere in the world.
12:01 >> NARRATOR: Entrepreneurs like Cao Xudong,
12:04 the 33-year-old C.E.O. of a new startup called Momenta.
12:10 This is a ring road around Beijing.
12:12 The car is driving itself.
12:15 ♪ ♪
12:21 >> You see, another cutting, another cutting-in.
12:24 >> Another cut-in, yeah, yeah.
12:26 >> NARRATOR: Cao has no doubt about the inevitability
12:29 of autonomous vehicles.
12:33 >> Just like AlphaGo can beat the human player in, in Go,
12:39 I think the machine will definitely surpass
12:43 the human driver, in the end.
12:47 >> NARRATOR: Recently, there have been cautions
12:48 about how soon autonomous vehicles will be deployed,
12:53 but Cao and his team are confident
12:55 they're in for the long haul.
12:58 >> U.S. will be the first to deploy,
13:01 but China may be the first to popularize.
13:03 It is 50-50 right now.
13:05 U.S. is ahead in technology.
13:07 China has a larger market, and the Chinese government
13:10 is helping with infrastructure efforts--
13:12 for example, building a new city the size of Chicago
13:16 with autonomous driving enabled,
13:18 and also a new highway that has sensors built in
13:21 to help autonomous vehicles be safer.
13:24 >> NARRATOR: Their early investors included
13:27 Mercedes-Benz.
13:29 >> I feel very lucky and very inspired
13:33 and very excited that we're living in this era.
13:38 ♪ ♪
13:40 >> NARRATOR: Life in China is largely conducted
13:42 on smartphones.
13:45 A billion people use WeChat, the equivalent of Facebook,
13:48 Messenger, and PayPal, and much more,
13:51 combined into just one super-app.
13:54 And there are many more.
13:55 >> China is the best place for A.I. implementation today,
14:00 because of the vast amount of data that's available in China.
14:04 China has a lot more users than any other country,
14:07 three to four times more than the U.S.
14:10 There are 50 times more mobile payments than the U.S.
14:14 There are ten times more food deliveries,
14:17 which serve as data to learn more about user behavior
14:21 than the U.S.
14:22 300 times more shared bicycle rides,
14:26 and each shared bicycle ride has all kinds of sensors
14:30 submitting data up to the cloud.
14:32 We're talking about maybe ten times more data than the U.S.,
14:36 and A.I. is basically run on data and fueled by data.
14:41 The more data, the better the A.I. works--
14:44 that matters more than how brilliant the researcher
14:47 working on the problem is.
14:49 So, in the age of A.I., where data is the new oil,
14:54 China is the new Saudi Arabia.
14:57 >> NARRATOR: And access to all that data
14:59 means that the deep-learning algorithm can quickly predict
15:02 behavior, like the creditworthiness of someone
15:05 wanting a short-term loan.
15:06 >> Here is our application.
15:09 And customers can choose how much money they want to borrow
15:13 and how long they want to borrow it,
15:16 and they can input their data here.
15:21 And after, after that, you can just borrow very quickly.
15:27 >> NARRATOR: The C.E.O. shows us how quickly you can get a loan.
15:31 >> It is, it is done.
15:33 >> NARRATOR: It takes an average of eight seconds.
15:35 >> It has passed to banks. >> Wow.
15:38 >> NARRATOR: In the eight seconds,
15:40 the algorithm has assessed 5,000 personal features
15:42 from all your data.
15:44 >> 5,000 features that are related to delinquency,
15:50 when maybe the banks only use a few, maybe ten features
15:57 when they are doing their risk assessment.
16:02 >> NARRATOR: Processing millions of transactions,
16:03 it'll dig up features that would never be apparent
16:06 to a human loan officer, like how confidently you type
16:11 your loan application, or, surprisingly,
16:15 if you keep your cell phone battery charged.
16:18 >> It's very interesting, the battery level of the phone
16:21 is related to the delinquency rate.
16:24 Someone who has a much lower battery
16:26 is a much riskier borrower than others.
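The pattern the C.E.O. describes can be sketched with a toy model. In this hypothetical example (synthetic data, two made-up features standing in for the real system's 5,000), a logistic-regression learner discovers on its own that low battery charge predicts delinquency, showing up as a negative learned weight:

```python
import numpy as np

# Sketch of feature-based default prediction on synthetic data.
# "battery" is one hypothetical feature among many; in this toy world,
# borrowers with low battery default more often, and the model discovers
# that as a negative learned weight -- no human told it to look there.

def train_default_model(n=2000, epochs=500, lr=0.1, seed=1):
    rng = np.random.default_rng(seed)
    battery = rng.uniform(0, 1, n)   # phone charge level, 0..1
    income = rng.uniform(0, 1, n)    # a second, uninformative feature
    # synthetic ground truth: lower battery -> higher default chance
    p_default = np.clip(0.8 - battery, 0.05, 0.95)
    y = (rng.uniform(0, 1, n) < p_default).astype(float)
    X = np.column_stack([battery, income, np.ones(n)])  # bias column
    w = np.zeros(3)
    for _ in range(epochs):
        pred = 1.0 / (1.0 + np.exp(-X @ w))     # predicted default risk
        w -= lr * X.T @ (pred - y) / n          # gradient step
    return w  # w[0] is the learned weight on battery charge
```

With enough features and transactions, the same mechanism surfaces correlations a human loan officer would never think to check.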
16:31 >> It's probably unfathomable to an American
16:34 how a country can dramatically evolve itself
16:39 from a copycat laggard, all of a sudden,
16:43 to nearly as good as the U.S. in technology.
16:48 >> NARRATOR: Like this facial-recognition startup
16:50 he invested in.
16:51 Megvii was started by three young graduates in 2011.
16:56 It's now a world leader in using A.I. to identify people.
17:03 >> It's pretty fast.
17:05 For example, on the mobile device,
17:07 we have timed the facial-recognition speed.
17:10 It's actually less than 100 milliseconds.
17:13 So, that's very, very fast.
17:15 So in 0.1 second we will be able to recognize you,
17:19 even on a mobile device.
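Systems like this typically reduce a face to a vector of numbers (an embedding) and compare it against a database. A minimal sketch of that matching step, with made-up three-number embeddings and hypothetical names (real embeddings have hundreds of dimensions):

```python
import numpy as np

# Toy face matcher: each enrolled person is a numeric embedding, and a
# query face is identified by its closest match above a similarity
# threshold. Real systems learn the embeddings with deep networks.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query, database, threshold=0.8):
    """Return the best-matching name, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Because each comparison is a handful of arithmetic operations, matching against a database this way is fast enough to run on a phone, which is consistent with the sub-100-millisecond figure quoted above.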
17:24 >> NARRATOR: The company claims the system is better
17:26 than any human at identifying people in its database.
17:30 And for those who aren't, it can describe them.
17:33 Like our director-- what he's wearing,
17:36 and a good guess at his age, missing it by only a few months.
17:42 >> We are the first one to really take facial recognition
17:46 to commercial quality.
17:50 >> NARRATOR: That's why in Beijing today,
17:52 you can pay for your KFC with a smile.
17:57 >> You know, it's not so surprising,
17:59 we've seen Chinese companies catching up to the U.S.
18:01 in technology for a long time.
18:02 And so, if particular effort and attention is paid
18:05 in a specific sector, it's not so surprising
18:07 that they would surpass the rest of the world.
18:09 And facial recognition is one of the, really the first places
18:12 we've seen that start to happen.
18:15 >> NARRATOR: It's a technology prized by the government,
18:18 like this program in Shenzhen to discourage jaywalking.
18:23 Offenders are shamed in public-- and with facial recognition,
18:27 can be instantly fined.
18:31 Critics warn that the government and some private companies
18:34 have been building a national database
18:37 from dozens of experimental social-credit programs.
18:41 >> The government wants to integrate
18:43 all these individual behaviors, or corporations' records,
18:48 into some kind of metrics and compute out a single number
18:55 or set of numbers associated with an individual,
18:59 a citizen, and, using that, implement an incentive
19:04 or punishment system.
19:06 >> NARRATOR: A high social-credit number
19:07 can be rewarded with discounts on bus fares.
19:11 A low number can lead to a travel ban.
19:15 Some say it's very popular with a Chinese public
19:18 that wants to punish bad behavior.
19:21 Others see a future that rewards party loyalty
19:25 and silences criticism.
19:28 >> Right now, there is no final system being implemented.
19:32 And from those experiments, we already see the possibility
19:41 of what this social-credit system can do to individuals.
19:44 It's very powerful-- Orwellian-like--
19:48 and it's extremely troublesome in terms of civil liberties.
19:56 >> NARRATOR: Every evening in Shanghai,
19:58 ever-present cameras record the crowds
20:01 as they surge down to the Bund,
20:03 the promenade along the banks of the Huangpu River.
20:07 Once, the great trading houses of Europe came here to do business
20:10 with the Middle Kingdom.
20:12 In the last century, they were all shut down
20:15 by Mao's revolution.
20:18 But now, in the age of A.I.,
20:20 people come here to take in a spectacle
20:22 that reflects China's remarkable progress.
20:26 (spectators gasp)
20:28 And illuminates the great political paradox of capitalism
20:32 taking root in the communist state.
20:37 >> People have called it market Leninism,
20:40 authoritarian capitalism.
20:43 We are watching a kind of a Petri dish
20:46 in which an experiment of, you know, extraordinary importance
20:54 to the world is being carried out.
20:55 Whether you can combine these things
20:59 and get something that's more powerful,
21:02 that's coherent, that's durable in the world.
21:04 Whether you can bring together a one-party state
21:07 with an innovative sector, both economically
21:12 and technologically innovative--
21:14 that's something we thought could not coexist.
21:20 >> NARRATOR: As China reinvents itself,
21:23 it has set its sights on leading the world
21:25 in artificial intelligence by 2030.
21:29 But that means taking on the world's most innovative
21:32 A.I. culture.
21:34 ♪ ♪
21:46 On an interstate in the U.S. Southwest,
21:49 artificial intelligence is at work solving the problem
21:52 that's become emblematic of the new age,
21:56 replacing a human driver.
21:58 ♪ ♪
22:04 This is the company's C.E.O., 24-year-old Alex Rodrigues.
22:11 >> The more things we build successfully,
22:13 the less people ask questions
22:15 about how old you are when you have working trucks.
22:18 >> NARRATOR: And this is what he's built.
22:21 Commercial goods are being driven from California
22:24 to Arizona on Interstate 10.
22:29 There is a driver in the cab, but he's not driving.
22:34 It's a path set by a C.E.O. with an unusual CV.
22:40 >> Are we ready, Henry?
22:42 The aim is to score these pucks into the scoring area.
22:47 So I, I did competitive robotics starting when I was 11,
22:51 and I took it very, very seriously.
22:53 To, to give you a sense, I won the Robotics World Championships
22:55 for the first time when I was 13.
22:57 I've been to worlds seven times
22:59 between the ages of 13 and 20-ish.
23:02 I eventually founded a team,
23:04 did a lot of work at a very high competitive level.
23:07 Things looking pretty good.
23:08 >> NARRATOR: This was a prototype of sorts,
23:10 from which he has built his multi-million-dollar company.
23:15 >> I hadn't built a robot in a while, wanted to get back to it,
23:18 and felt that this was by far the most exciting piece
23:21 of robotics technology that was up and coming.
23:22 A lot of people told us we wouldn't be able to build it.
23:25 But we knew roughly the techniques that you would use.
23:28 And I was pretty confident that if you put them together,
23:30 you would get something that worked.
23:32 Took the summer off, built in my parents' garage a golf cart
23:35 that could drive itself.
23:40 >> NARRATOR: That golf cart got the attention
23:42 of Silicon Valley, and the first of several rounds
23:45 of venture capital.
23:47 He formed a team and then decided the business opportunity
23:50 was in self-driving trucks.
23:53 He says there's also a human benefit.
23:56 >> If we can build a truck that's ten times safer
23:58 than a human driver, then not much else actually matters.
24:02 When we talk to regulators, especially,
24:05 everyone agrees that the only way that we're going to get
24:08 to zero highway deaths, which is everyone's objective,
24:11 is to use self-driving.
24:13 And so, I'm sure you've heard the statistic,
24:17 more than 90% of all crashes
24:19 have a human driver as the cause.
24:20 So if you want to solve traffic fatalities,
24:24 which, in my opinion, are the single biggest tragedy
24:28 that happens year after year in the United States,
24:30 this is the only solution.
24:33 >> NARRATOR: It's an ambitious goal,
24:36 but only possible because of the recent breakthroughs
24:38 in deep learning.
24:40 >> Artificial intelligence is one of those key pieces
24:42 that has made it possible now to do driverless vehicles
24:46 where it wasn't possible ten years ago,
24:49 particularly in the ability to see and understand scenes.
24:53 A lot of people don't know this, but it's remarkably hard
24:57 for computers, until very, very recently,
24:58 to do even the most basic visual tasks,
25:02 like seeing a picture of a person
25:04 and knowing that it's a person.
25:06 And we've made gigantic strides with artificial intelligence
25:09 in being able to see and understand tasks,
25:11 and that's obviously fundamental to being able to understand
25:14 the world around you with the sensors that,
25:15 that you have available.
25:19 >> NARRATOR: That's now possible
25:21 because of the algorithms written by Yoshua Bengio
25:23 and a small group of scientists.
25:28 >> There are many aspects of the world
25:30 which we can't explain with words.
25:34 And that part of our knowledge is actually
25:36 probably the majority of it.
25:39 So, like, the stuff we can communicate verbally
25:41 is the tip of the iceberg.
25:43 And so to get at the bottom of the iceberg, the solution was,
25:48 the computers have to acquire that knowledge by themselves
25:53 from data, from examples.
25:54 Just like children learn, mostly not from their teachers,
25:58 but from interacting with the world,
26:01 and playing around, and, and trying things
26:03 and seeing what works and what doesn't work.
26:05 >> NARRATOR: This is an early demonstration.
26:07 In 2013, DeepMind scientists set a machine-learning program
26:12 on the Atari video game Breakout.
26:16 The computer was only told the goal-- to win the game.
26:19 After 100 games, it learned to use the bat at the bottom
26:24 to hit the ball and break the bricks at the top.
26:27 After 300, it could do that better than a human player.
26:33 After 500 games, it came up with a creative way to win the game--
26:37 by digging a tunnel on the side
26:40 and sending the ball around the top
26:42 to break many bricks with one hit.
26:44 That was deep learning.
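What the Breakout experiment illustrates is reinforcement learning: the program is told only the reward, and discovers the strategy through trial and error. A minimal sketch of the same idea, using tabular Q-learning on a toy five-cell corridor rather than DeepMind's deep network and Atari screen:

```python
import numpy as np

# Tabular Q-learning on a 5-cell corridor: the agent starts at cell 0,
# is told only that reaching cell 4 earns a reward of 1, and must
# discover the walk-right strategy by trial and error.

N_STATES, LEFT, RIGHT = 5, 0, 1

def step(state, action):
    nxt = max(0, state - 1) if action == LEFT else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, 2))  # value estimate for each state/action
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 50:
            # explore randomly some of the time, otherwise act greedily
            a = int(rng.integers(2)) if rng.uniform() < epsilon else int(np.argmax(Q[s]))
            nxt, r, done = step(s, a)
            # move the estimate toward reward + discounted future value
            Q[s, a] += alpha * (r + gamma * np.max(Q[nxt]) * (not done) - Q[s, a])
            s, steps = nxt, steps + 1
    return Q
```

After training, the table values "move right" above "move left" in every cell, the same way the Breakout program came to value tunnel-digging moves: not because anyone told it to, but because those moves led to reward.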
26:48 >> That's the A.I. program based on learning,
26:50 really, that has been so successful
26:52 in the last few years and has...
26:54 It wasn't clear ten years ago that it would work,
26:57 but it has completely changed the map
27:00 and is now used in almost every sector of society.
27:06 >> Even the best and brightest among us,
27:08 we just don't have enough compute power
27:11 inside of our heads.
27:13 >> NARRATOR: Amy Webb is a professor at N.Y.U.
27:16 and founder of the Future Today Institute.
27:19 >> As A.I. progresses, the great promise is that they...
27:26 they, these, these machines, alongside of us,
27:30 are able to think and imagine and see things
27:34 in ways that we never have before,
27:36 which means that maybe we have some kind of new,
27:40 weird, seemingly implausible solution to climate change.
27:45 Maybe we have some radically different approach
27:49 to dealing with incurable cancers.
27:52 The real practical and wonderful promise is that machines help us
27:58 be more creative, and, using that creativity,
28:02 we get to terrific solutions.
28:06 >> NARRATOR: Solutions that could come unexpectedly
28:09 to urgent problems.
28:11 >> It's going to change the face of breast cancer.
28:13 Right now, 40,000 women in the U.S. alone
28:16 die from breast cancer every single year.
28:19 >> NARRATOR: Dr. Connie Lehman is head
28:21 of the breast imaging center
28:23 at Massachusetts General Hospital in Boston.
28:26 >> We've become so complacent about it,
28:28 we almost don't think it can really be changed.
28:31 We, we somehow think we should put all of our energy
28:33 into chemotherapies to save women
28:36 with metastatic breast cancer,
28:38 and yet, you know, when we find it early, we cure it,
28:41 and we cure it without having the ravages to the body
28:44 when we diagnose it late.
28:46 This shows the progression of a small, small spot from one year
28:51 to the next, and then to the diagnosis
28:54 of the small cancer here.
28:57 >> NARRATOR: This is what happened when a woman
28:59 who had been diagnosed with breast cancer
29:02 started to ask questions
29:04 about why it couldn't have been diagnosed earlier.
29:07 >> It really brings a lot of anxiety,
29:10 and you're asking the questions, you know,
29:12 "Am I going to survive?
29:13 What's going to happen to my son?"
29:15 And I start asking other questions.
29:19 >> NARRATOR: She was used to asking questions.
29:21 At M.I.T.'s artificial-intelligence lab,
29:24 Professor Regina Barzilay uses deep learning
29:27 to teach the computer to understand language,
29:31 as well as read text and data.
29:34 >> I was really surprised that the very basic question
29:37 that I ask my physicians,
29:39 who were really excellent physicians here at MGH,
29:43 they couldn't give me answers that I was looking for.
29:47 >> NARRATOR: She was convinced that if you analyze enough data,
29:50 from mammograms to diagnostic notes,
29:53 the computer could predict early-stage conditions.
29:56 >> If we fast-forward from 2012 to '13 to 2014,
30:02 we then see when Regina was diagnosed,
30:05 because of this spot on her mammogram.
30:10 Is it possible, with more elegant computer applications,
30:14 that we might have identified this spot the year before,
30:19 or even back here?
30:21 >> So, those are standard prediction problems
30:22 in machine learning-- there is nothing special about them.
30:26 And to my big surprise, none of the technologies
30:29 that we are developing at M.I.T.,
30:33 even in the most simple form, penetrates the hospital.
30:38 >> NARRATOR: Regina and Connie began the slow process
30:41 of getting access to thousands of mammograms and records
30:45 from MGH's breast-imaging program.
30:49 >> So, our first foray was just to take all of the patients
30:53 we had at MGH during a period of time,
30:56 who had had breast surgery for a certain type
30:58 of high-risk lesion.
31:00 And we found that most of them didn't really need the surgery.
31:03 They didn't have cancer.
31:05 But about ten percent did have cancer.
31:07 With Regina's techniques in deep learning
31:10 and machine learning, we were able to predict the women
31:13 that truly needed the surgery and separate out
31:15 those that really could avoid the unnecessary surgery.
31:19 >> What the machine can do, it can take hundreds of thousands
31:23 of images where the outcome is known
31:25 and learn, based on how, you know, pixels are distributed,
31:30 what are the very unique patterns that correlate highly
31:35 with future occurrence of the disease.
31:38 So, instead of using human capacity
31:40 to kind of recognize pattern, formalize pattern--
31:44 which is inherently limited by our cognitive capacity
31:48 and how much we can see and remember--
31:50 we're providing the machine with a lot of data
31:53 and making it learn this prediction.
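The approach described here can be sketched in miniature: average the pixels of each labeled class, then classify a new image by which average it most resembles. This nearest-centroid toy (tiny synthetic "images", hypothetical labels) stands in for the deep networks used on real mammograms:

```python
import numpy as np

# Minimal pattern-learner: average the pixels of each labeled class,
# then label a new image by whichever class average it sits closer to.
# A stand-in for the deep networks trained on thousands of mammograms.

def train_centroids(images, labels):
    """images: (n, pixels) array-like; labels: list of class names."""
    images = np.asarray(images, dtype=float)
    labels = np.array(labels)
    return {c: images[labels == c].mean(axis=0) for c in set(labels)}

def predict(centroids, image):
    image = np.asarray(image, dtype=float)
    # pick the class whose average image is nearest in pixel space
    return min(centroids, key=lambda c: np.linalg.norm(image - centroids[c]))
```

The real systems learn far subtler pixel patterns than a class average, but the principle is the one stated above: outcomes are known for the training images, and the pattern is extracted from the pixels rather than hand-specified.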
31:57 >> So, we are using technology not only to be better
32:02 at assessing the breast density,
32:04 but to get more to the point of what we're trying to predict.
32:07 "Does this woman have a cancer now,
32:10 and will she develop a cancer in five years?"
32:13 And that's, again, where the artificial intelligence,
32:16 machine and deep learning can really help us
32:18 and our patients.
32:20 >> NARRATOR: In the age of A.I.,
32:22 the algorithms are transporting us into a universe
32:26 of vast potential and transforming almost every aspect
32:29 of human endeavor and experience.
32:34 Andrew McAfee is a research scientist at M.I.T.
32:38 who co-authored "The Second Machine Age."
32:42 >> The great compliment that a songwriter gives another one is,
32:45 "Gosh, I wish I had written that one."
32:46 The great compliment a geek gives another one is,
32:49 "Wow, I wish I had drawn that graph."
32:50 So, I wish I had drawn this graph.
32:53 >> NARRATOR: The graph uses a formula
32:55 to show human development and growth since 2000 BCE.
32:59 >> The state of human civilization
33:01 is not very advanced, and it's not getting better
33:04 very quickly at all, and this is true for thousands
33:07 and thousands of years.
33:08 When we, when we formed empires and empires got overturned,
33:12 when we tried democracy, when we invented zero
33:16 and mathematics and fundamental discoveries about the universe,
33:19 big deal.
33:21 It just, the numbers don't change very much.
33:23 What's weird is that the numbers change essentially in the blink
33:26 of an eye at one point in time.
33:28 And it goes from really horizontal, unchanging,
33:32 uninteresting, to, holy Toledo, crazy vertical.
33:36 And then the question is, what on Earth happened
33:39 to cause that change?
33:40 And the answer is the Industrial Revolution.
33:42 There were other things that happened,
33:44 but really what fundamentally happened is
33:46 we overcame the limitations of our muscle power.
33:49 Something equally interesting is happening right now.
33:52 We are overcoming the limitations of our minds.
33:55 We're not getting rid of them,
33:56 we're not making them unnecessary,
33:58 but, holy cow, can we leverage them and amplify them now.
34:02 You have to be a huge pessimist
34:04 not to find that profoundly good news.
34:06 >> I really do think the world has entered a new era.
34:09 Artificial intelligence holds so much promise,
34:12 but it's going to reshape every aspect of the economy,
34:15 so many aspects of our lives.
34:17 Because A.I. is a little bit like electricity.
34:20 Everybody's going to use it.
34:22 Every company is going to be incorporating A.I.,
34:26 integrating it into what they do,
34:28 governments are going to be using it,
34:29 nonprofit organizations are going to be using it.
34:33 It's going to create all kinds of benefits
34:37 in ways large and small, and challenges for us, as well.
34:41 >> NARRATOR: The challenges, the benefits--
34:44 the autonomous truck represents both
34:47 as it maneuvers into the marketplace.
34:50 The engineers are confident that, in spite of questions
34:53 about when this will happen,
34:55 they can get it working safely sooner
34:57 than most people realize.
34:58 >> I think that you will see the first vehicles operating
35:02 with no one inside them moving freight in the next few years,
35:05 and then you're going to see that expanding to more freight,
35:07 more geographies, more weather over time as,
35:11 as that capability builds up.
35:12 We're talking, like, less than half a decade.
35:16 >> NARRATOR: He already has a Fortune 500 company
35:19 as a client, shipping appliances across the Southwest.
35:23 He says the sales pitch is straightforward.
35:27 >> They spend hundreds of millions of dollars a year
35:30 shipping parts around the country.
35:31 We can cut that cost in half.
35:34 And they're really excited to be able to start working with us,
35:36 both because of the potential,
35:39 the potential savings from deploying self-driving,
35:42 and also because of all the operational efficiencies
35:44 that they see, the biggest one being able to operate
35:47 24 hours a day.
35:49 So, right now, human drivers are limited to 11 hours
35:51 by federal law, and a driverless truck
35:55 obviously wouldn't have that limitation.
35:57 ♪ ♪
36:02 >> NARRATOR: The idea of a driverless truck comes up often
36:05 in discussions about artificial intelligence.
36:11 Steve Viscelli is a sociologist who drove a truck
36:14 while researching his book "The Big Rig" about the industry.
36:20 >> This is one of the most remarkable stories
36:23 in, in U.S. labor history, I think,
36:25 is, you know, the decline of, of unionized trucking.
36:30 The industry was deregulated in 1980,
36:33 and at that time, you know, truck drivers were earning
36:37 the equivalent of over $100,000 in today's dollars.
36:41 And today the typical truck driver will earn
36:45 a little over $40,000 a year.
36:50 And I think it's an important part
36:52 of the automation story, right?
36:54 Why are they so afraid of automation?
36:56 Because we've had four decades of rising inequality in wages.
37:00 And if anybody is going to take it on the chin
37:03 from automation in the trucking industry,
37:05 the, the first in line is going to be the driver,
37:07 without a doubt.
37:12 >> NARRATOR: For his research, Viscelli tracked down truckers
37:14 and their families, like Shawn and Hope Cumbee
37:17 of Beaverton, Michigan. >> Hi.
37:19 >> Hey, Hope, I'm Steve Viscelli.
37:20 >> Hi, Steve, nice to meet you. Come on in.
37:21 >> Great to meet you, too, thanks.
37:24 >> NARRATOR: And their son Charlie.
37:26 >> This is Daddy, me, Daddy, and Mommy.
37:31 >> NARRATOR: But Daddy's not here.
37:34 Shawn Cumbee's truck has broken down in Tennessee.
37:38 Hope, who drove a truck herself, knows the business well.
37:43 >> We made $150,000, right, in a year.
37:46 That sounds great, right?
37:48 That's, like, good money.
37:50 We paid $100,000 in fuel, okay?
37:53 So, right there, now I made $50,000.
37:57 But I didn't really, because, you know,
37:59 you get an oil change every month,
38:00 so that's $300 a month.
38:02 You still have to do all the maintenance.
38:04 We had a motor blow out, right?
38:06 $13,000. Right?
38:09 I know, I mean, I choke up a little just thinking about it,
38:11 because it was...
38:13 And it was 13,000, and we were off work for two weeks.
38:17 So, by the end of the year, with that $150,000,
38:19 by the end of the year, we'd made about 20...
38:22 About $22,000.
38:26 >> NARRATOR: In a truck stop in Tennessee,
38:28 Shawn has been sidelined waiting for a new part.
38:31 The garage owner is letting him stay in the truck to save money.
38:37 >> Hi, baby.
38:39 >> (on phone): Hey, how's it going?
38:41 >> It's going. Chunky-butt!
38:42 >> Hi, Daddy! >> Hi, Chunky-butt.
38:44 What're you doing? >> (talking inaudibly)
38:47 >> Believe it or not, I do it because I love it.
38:49 I mean, you know, it's in the blood.
38:51 Third-generation driver.
38:52 And my granddaddy told me a long time ago,
38:55 when I was probably 11, 12 years old, probably,
38:58 he said, "The world meets nobody halfway.
39:01 Nobody."
39:02 He said, "If you want it, you have to earn it."
39:07 And that's what I do every day.
39:09 I live by that creed.
39:11 And I've lived by that since it was told to me.
39:16 >> So, if you're down for a week in a truck,
39:18 you still have to pay your bills.
39:19 I have enough money in my checking account at all times
39:22 to pay a month's worth of bills.
39:23 That does not include my food.
39:25 That doesn't include field trips for my son's school.
39:27 My son and I just went to our yearly doctor appointment.
39:31 I took, I took money out of my son's piggy bank to pay for it,
39:36 because it's not... it's not scheduled in.
39:40 It's, it's not something that you can, you know, afford.
39:43 I mean, like, when...
39:45 (sighs): Sorry.
39:46 >> It's okay.
39:48 ♪ ♪
39:57 Have you guys ever talked about self-driving trucks?
39:59 Is he...
40:00 >> (laughing): So, kind of.
40:03 Um, I asked him once, you know.
40:05 And he laughed so hard.
40:07 He said, "No way will they ever have a truck
40:10 that can drive itself."
40:12 >> It's kind of interesting when you think about it, you know,
40:15 they're putting all this new technology into things,
40:17 but, you know, it's still man-made.
40:19 And man, you know, does make mistakes.
40:22 I really don't see it being a problem with the industry,
40:26 'cause, one, you still got to have a driver in it,
40:28 because I don't see it doing city.
40:30 I don't see it doing, you know, main things.
40:32 I don't see it backing into a dock.
40:34 I don't see the automation part, you know, doing...
40:37 maybe the box-trailer side, you know, I can see that,
40:39 but not stuff like I do.
40:41 So, I ain't really worried about the automation of trucks.
40:44 >> How near of a future is it?
40:46 >> Yeah, self-driving, um...
40:49 So, some, you know, some companies are already operating.
40:52 Embark, for instance, is one that has been doing
40:56 driverless trucks on the interstate.
40:59 And what's called exit-to-exit self-driving.
41:01 And they're currently running real freight.
41:04 >> Really? >> Yeah, on I-10.
41:07 ♪ ♪
41:10 >> (on P.A.): Shower guest 100, your shower is now ready.
41:15 >> NARRATOR: Over time, it has become harder and harder
41:18 for veteran independent drivers like the Cumbees
41:21 to make a living.
41:23 They've been replaced by younger,
41:25 less experienced drivers.
41:28 >> So, the, the trucking industry's $740 billion a year,
41:32 and, again, in, in many of these operations,
41:34 labor's a third of that cost.
41:37 By my estimate, I, you know, I think we're in the range
41:40 of 300,000 or so jobs in the foreseeable future
41:42 that could be automated to some significant extent.
41:47 ♪ ♪
41:50 >> (groans)
41:53 ♪ ♪
42:03 >> NARRATOR: The A.I. future was built with great optimism
42:06 out here in the West.
42:09 In 2018, many of the people who invented it
42:12 gathered in San Francisco to celebrate the 25th anniversary
42:16 of the industry magazine.
42:18 >> Howdy, welcome to WIRED25.
42:22 >> NARRATOR: It is a celebration, for sure,
42:24 but there's also a growing sense of caution
42:27 and even skepticism.
42:31 >> We're having a really good weekend here.
42:33 >> NARRATOR: Nick Thompson is editor-in-chief of "Wired."
42:37 >> When it started, it was very much a magazine
42:40 about what's coming and why you should be excited about it.
42:44 Optimism was the defining feature of "Wired"
42:47 for many, many years.
42:49 Or, as our slogan used to be, "Change Is Good."
42:53 And over time, it shifted a little bit.
42:55 And now it's more, "We love technology,
42:59 but let's look at some of the big issues,
43:00 and let's look at some of them critically,
43:03 and let's look at the way algorithms are changing
43:05 the way we behave, for good and for ill."
43:07 So, the whole nature of "Wired" has gone from a champion
43:12 of technological change to more of a observer
43:14 of technological change.
43:16 >> So, um, before we start...
43:18 >> NARRATOR: There are 25 speakers,
43:20 all named as icons of the last 25 years
43:23 of technological progress.
43:25 >> So, why is Apple so secretive?
43:27 >> (chuckling)
43:29 >> NARRATOR: Jony Ive, who designed Apple's iPhone.
43:31 >> It would be bizarre not to be.
43:34 >> There's this question of, like,
43:36 what are we doing here in this life, in this reality?
43:39 >> NARRATOR: Jaron Lanier, who pioneered virtual reality.
43:43 And Jeff Bezos, the founder of Amazon.
43:46 >> Amazon was a garage startup.
43:47 Now it's a very large company.
43:49 Two kids in a dorm...
43:50 >> NARRATOR: His message is,
43:52 "All will be well in the new world."
43:54 >> I guess, first of all, I remain incredibly optimistic
43:58 about technology,
43:59 and technologies always are two-sided.
44:01 But that's not new.
44:03 That's always been the case.
44:05 And, and we will figure it out.
44:07 The last thing we would ever want to do is stop the progress
44:10 of new technologies, even when they are dual-use.
44:16 >> NARRATOR: But, says Thompson, beneath the surface,
44:19 there's a worry most of them don't like to talk about.
44:22 >> There are some people in Silicon Valley who believe that,
44:26 "You just have to trust the technology.
44:29 Throughout history, there's been a complicated relationship
44:32 between humans and machines,
44:34 we've always worried about machines,
44:36 and it's always been fine.
44:38 And we don't know how A.I. will change the labor force,
44:41 but it will be okay."
44:42 So, that argument exists.
44:44 There's another argument,
44:45 which is what I think most of them believe deep down,
44:48 which is, "This is different.
44:51 We're going to have labor-force disruption
44:52 like we've never seen before.
44:55 And if that happens, will they blame us?"
44:59 >> NARRATOR: There is, however, one of the WIRED25 icons
45:02 willing to take on the issue.
45:05 Onstage, Kai-Fu Lee dispenses with one common fear.
45:09 >> Well, I think there are so many myths out there.
45:11 I think one, one myth is that
45:14 because A.I. is so good at a single task,
45:17 that one day we'll wake up, and we'll all be enslaved
45:21 or forced to plug our brains to the A.I.
45:24 But it is nowhere close to displacing humans.
45:28 >> NARRATOR: But in interviews around the event and beyond,
45:32 he takes a decidedly contrarian position on A.I. and job loss.
45:37 >> The A.I. giants want to paint the rosier picture
45:41 because they're happily making money.
45:43 So, I think they prefer not to talk about the negative side.
45:47 I believe about 50% of jobs will be
45:53 somewhat or extremely threatened by A.I.
45:56 in the next 15 years or so.
46:00 >> NARRATOR: Kai-Fu Lee also makes a great deal
46:02 of money from A.I.
46:04 What separates him from most of his colleagues
46:06 is that he's frank about its downside.
46:09 >> Yes, yes, we, we've made about 40 investments in A.I.
46:13 I think, based on these 40 investments,
46:16 most of them are not impacting human jobs.
46:20 They're creating value, making high margins,
46:21 inventing a new model.
46:24 But I could list seven or eight
46:27 that would lead to a very clear displacement of human jobs.
46:32 >> NARRATOR: He says that A.I. is coming,
46:34 whether we like it or not.
46:36 And he wants to warn society
46:38 about what he sees as inevitable.
46:41 >> You have a view which I think is different than many others,
46:43 which is that A.I. is not going to take blue-collar jobs
46:48 so quickly, but is actually going to take white-collar jobs.
46:51 >> Yeah. Well, both will happen.
46:53 A.I. will be, at the same time, a replacement for blue-collar,
46:57 white-collar jobs, and be a great symbiotic tool
47:00 for doctors, lawyers, and you, for example.
47:03 But the white-collar jobs are easier to take,
47:05 because they're a pure quantitative analytical process.
47:10 Let's say reporters, traders, telemarketing,
47:15 telesales, customer service...
47:17 >> Analysts?
47:18 >> Analysts, yes, these can all be replaced just by a software.
47:23 To do blue-collar, some of the work requires, you know,
47:26 hand-eye coordination, things that machines are not yet
47:30 good enough to do.
47:32 >> Today, there are many people who are ringing the alarm,
47:36 "Oh, my God, what are we going to do?
47:37 Half the jobs are going away."
47:39 I believe that's true, but here's the missing fact.
47:43 I've done the research on this, and if you go back 20, 30,
47:46 or 40 years ago, you will find that 50% of the jobs
47:50 that people performed back then are gone today.
47:54 You know, where are all the telephone operators,
47:56 bowling-pin setters, elevator operators?
48:00 You used to have seas of secretaries in corporations
48:04 that have now been eliminated-- travel agents.
48:06 You can just go through field after field after field.
48:08 That same pattern has recurred many times throughout history,
48:12 with each new wave of automation.
48:14 >> But I would argue that history is only trustable
48:20 if it is multiple repetitions of similar events,
48:24 not once-in-a-blue-moon occurrence.
48:28 So, over the history of many tech inventions,
48:33 most are small things.
48:34 Only maybe three are at the magnitude of A.I. revolution--
48:41 the steam, steam engine, electricity,
48:44 and the computer revolution.
48:46 I'd say everything else is too small.
48:48 And the reason I think it might be something brand-new
48:52 is that A.I. is fundamentally replacing our cognitive process
48:58 in doing a job in its significant entirety,
49:03 and it can do it dramatically better.
49:06 >> NARRATOR: This argument about job loss
49:08 in the age of A.I. was ignited six years ago
49:11 amid the gargoyles and spires of Oxford University.
49:15 Two researchers had been poring through U.S. labor statistics,
49:19 identifying jobs that could be vulnerable to A.I. automation.
49:25 >> Well, vulnerable to automation,
49:27 in the context that we discussed five years ago now,
49:30 essentially meant that those jobs are potentially automatable
49:34 over an unspecified number of years.
49:36 And the figure we came up with was 47%.
49:41 >> NARRATOR: 47%.
49:43 That number quickly traveled the world in headlines
49:46 and news bulletins.
49:47 But authors Carl Frey and Michael Osborne
49:51 offered a caution.
49:52 They can't predict how many jobs will be lost, or how quickly.
49:57 But Frey believes that there are lessons in history.
50:02 >> And what worries me the most is that there is actually
50:04 one episode that looks quite familiar to today,
50:08 which is the British Industrial Revolution,
50:12 where wages didn't grow for nine decades,
50:16 and a lot of people actually saw living standards decline
50:20 as technology progressed.
50:23 ♪ ♪
50:25 >> NARRATOR: Saginaw, Michigan, knows about decline
50:28 in living standards.
50:31 Harry Cripps, an auto worker and a local union president,
50:34 has witnessed what 40 years of automation can do to a town.
50:40 >> You know, we're one of the cities in the country that,
50:43 I think we were left behind in this recovery.
50:47 And I just... I don't know how we get on the bandwagon now.
50:54 >> NARRATOR: Once, this was the U.A.W. hall
50:57 for one local union.
50:59 Now, with falling membership, it's shared by five locals.
51:03 >> Rudy didn't get his shift.
51:05 >> NARRATOR: This day, it's the center
51:07 for a Christmas food drive.
51:09 Even in a growth economy,
51:12 unemployment here is near six percent.
51:14 Poverty in Saginaw is over 30%.
51:21 >> Our factory has about 1.9 million square feet.
51:25 Back in the '70s, that 1.9 million square feet
51:29 had about 7,500 U.A.W. automotive workers
51:32 making middle-class wage with decent benefits
51:34 and able to send their kids to college and do all the things
51:36 that the middle-class family should be able to do.
51:39 Our factory today, with automation,
51:42 would probably be about 700 United Auto Workers.
51:46 That's a dramatic change.
51:50 Lot of union brothers used to work there, buddy.
51:52 >> The TRW plant, that was unfortunate.
51:55 >> Delphi... looks like they're starting to tear it down now.
51:57 Wow.
51:59 Automation is, is definitely taking away a lot of jobs.
52:02 Robots, I don't know how they buy cars,
52:05 I don't know how they buy sandwiches,
52:07 I don't know how they go to the grocery store.
52:09 They definitely don't pay taxes, which serves the infrastructure.
52:11 So, you don't have the sheriffs and the police and the firemen,
52:15 and anybody else that supports the city is gone,
52:18 'cause there's no tax base.
52:19 Robots don't pay taxes.
52:23 >> NARRATOR: The average personal income in Saginaw
52:25 is $16,000 a year.
52:29 >> A lot of the families that I work with here in the community,
52:32 both parents are working.
52:33 They're working two jobs.
52:35 Mainly, it's the wages, you know,
52:38 people not making a decent wage to be able to support a family.
52:43 Like, back in the day, my dad even worked at the plant.
52:46 My mom stayed home, raised the children.
52:49 And that give us the opportunity to put food on the table,
52:52 and things of that nature.
52:53 And, and them times are gone.
52:56 >> If you look at this graph of what's been happening
52:57 to America since the end of World War II,
52:59 you see a line for our productivity,
53:03 and our productivity gets better over time.
53:05 It used to be the case that our pay, our income,
53:08 would increase in lockstep with those productivity increases.
53:12 The weird part about this graph is how the income has decoupled,
53:17 is not going up the same way that productivity is anymore.
53:21 >> NARRATOR: As automation has taken over,
53:24 workers are either laid off or left with less-skilled jobs
53:27 for less pay, while productivity goes up.
53:31 >> There are still plenty of factories in America.
53:33 We are a manufacturing powerhouse,
53:35 but if you go walk around an American factory,
53:37 you do not see long lines of people
53:40 doing repetitive manual labor.
53:42 You see a whole lot of automation.
53:44 If you go upstairs in that factory
53:46 and look at the payroll department,
53:47 you see one or two people looking into a screen all day.
53:51 So, the activity is still there,
53:53 but the number of jobs is very, very low,
53:56 because of automation and tech progress.
53:58 Now, dealing with that challenge,
54:01 and figuring out what the next generation
54:02 of the American middle class should be doing,
54:05 is a really important challenge,
54:07 because I am pretty confident that we are never again
54:10 going to have this large, stable, prosperous
54:13 middle class doing routine work.
54:15 ♪ ♪
54:19 >> NARRATOR: Evidence of how A.I. is likely to bring
54:21 accelerated change to the U.S. workforce can be found
54:25 not far from Saginaw.
54:27 This is the U.S. headquarters
54:29 for one of the world's largest builders of industrial robots,
54:34 a Japanese-owned company called Fanuc Robotics.
54:38 >> We've been producing robots for well over 35 years.
54:41 And you can imagine, over the years,
54:42 they've changed quite a bit.
54:45 We're utilizing the artificial intelligence
54:48 to really make the robots easier to use
54:49 and be able to handle a broader spectrum of opportunities.
54:54 We see a huge growth potential in robotics.
54:57 And we see that growth potential as being, really,
55:00 there's 90% of the market left.
55:03 >> NARRATOR: The industry says optimistically
55:05 that with that growth, they can create more jobs.
55:09 >> Even if there were five people on a job,
55:11 and we reduced that down to two people,
55:12 because we automated some level of it,
55:15 we might produce two times more parts than we did before,
55:18 because we automated it.
55:20 So now, there might be the need for two more fork-truck drivers,
55:26 or two more quality-inspection personnel.
55:29 So, although we reduce some of the people,
55:31 we grow in other areas as we produce more things.
55:36 >> When I increase productivity through automation, I lose jobs.
55:41 Jobs go away.
55:42 And I don't care what the robot manufacturers say,
55:45 you aren't replacing those ten production people
55:47 that that robot is now doing that job, with ten people.
55:51 You can increase productivity to a level to stay competitive
55:54 with the global market-- that's what they're trying to do.
55:58 ♪ ♪
56:00 >> NARRATOR: In the popular telling,
56:02 blame for widespread job loss has been aimed overseas,
56:06 at what's called offshoring.
56:08 >> We want to keep our factories here,
56:11 we want to keep our manufacturing here.
56:13 We don't want them moving to China, to Mexico, to Japan,
56:17 to India, to Vietnam.
56:21 >> NARRATOR: But it turns out most of the job loss
56:23 isn't because of offshoring.
56:26 >> There's been offshoring.
56:27 And I think offshoring is responsible for maybe 20%
56:32 of the jobs that have been lost.
56:34 I would say most of the jobs that have been lost,
56:36 despite what most Americans thinks, was due to automation
56:38 or productivity growth.
56:41 >> NARRATOR: Mike Hicks is an economist
56:43 at Ball State University in Muncie, Indiana.
56:46 He and sociologist Emily Wornell have been documenting
56:50 employment trends in Middle America.
56:52 Hicks says that automation has been a mostly silent job killer,
56:57 lowering the standard of living.
56:59 >> So, in the last 15 years, the standard of living has dropped
57:02 by 15, ten to 15 percent.
57:04 So, that's unusual in a developed world.
57:07 A one-year decline is a recession.
57:08 A 15-year decline gives an entirely different sense
57:12 about the prospects of a community.
57:14 And so that is common from the Canadian border
57:18 to the Gulf of Mexico
57:20 in the middle swath of the United States.
57:23 >> This is something we're gonna do for you guys.
57:26 These were left over from our suggestion drive that we did,
57:30 and we're going to give them each two.
57:32 >> That is awesome. >> I mean,
57:33 that is going to go a long ways, right?
57:35 I mean, that'll really help that family out during the holidays.
57:37 >> Yes, well, with the kids home from school,
57:39 the families have three meals a day that they got
57:41 to put on the table.
57:43 So, it's going to make a big difference.
57:45 So, thank you, guys. >> You're welcome.
57:47 >> This is wonderful. >> Let them know Merry Christmas
57:48 on behalf of us here at the local, okay?
57:50 >> Absolutely, you guys are just, just amazing, thank you.
57:52 And please, tell, tell all the workers how grateful
57:56 these families will be. >> We will.
57:57 >> I mean, this is not a small problem.
58:00 The need is so great.
58:02 And I can tell you that it's all races,
58:05 it's all income classes
58:08 that you might think someone might be from.
58:09 But I can tell you that when you see it,
58:11 and you deliver this type of gift to somebody
58:15 who is in need, just the gratitude that they show you
58:18 is incredible.
58:22 >> We actually know that people are at greater risk of mortality
58:26 for over 20 years after they lose their job due to,
58:30 due to no fault of their own, so something like automation
58:32 or offshoring.
58:34 They're at higher risk for cardiovascular disease,
58:36 they're at higher risk for depression and suicide.
58:42 But then with the intergenerational impacts,
58:44 we also see their children are more likely--
58:48 children of parents who have lost their job
58:50 due to automation-- are more likely to repeat a grade,
58:53 they're more likely to drop out of school,
58:55 they're more likely to be suspended from school,
58:57 and they have lower educational attainment
58:59 over their entire lifetimes.
59:03 >> It's the future of this, not the past, that scares me.
59:06 Because I think we're in the early decades
59:08 of what is a multi-decade adjustment period.
59:11 ♪ ♪
59:14 >> NARRATOR: The world is being re-imagined.
59:18 This is a supermarket.
59:20 Robots, guided by A.I., pack everything from soap powder
59:24 to cantaloupes for online consumers.
59:29 Machines that pick groceries,
59:31 machines that can also read reports, learn routines,
59:35 and comprehend are reaching deep into factories,
59:38 stores, and offices.
59:41 At a college in Goshen, Indiana,
59:43 a group of local business and political leaders come together
59:47 to try to understand the impact of A.I. and the new machines.
59:52 Molly Kinder studies the future of work
59:54 at a Washington think tank.
59:56 >> How many people have gone into a fast-food restaurant
59:58 and done a self-ordering?
60:01 Anyone, yes?
60:02 Panera, for instance, is doing this.
60:04 Cashier was my first job, and in, in, where I live,
60:08 in Washington, DC, it's actually the number-one occupation
60:10 for the greater DC region.
60:12 There are millions of people who work in cashier positions.
60:14 This is not a futuristic challenge,
60:17 this is something that's happening sooner than we think.
60:19 In the popular discussions about robots and automation and work,
60:24 almost every image is of a man on a factory floor
60:28 or a truck driver.
60:29 And yet, in our data, when we looked,
60:32 women disproportionately hold the jobs that today
60:35 are at highest risk of automation.
60:37 And that's not really being talked about,
60:40 and that's in part because women are over-represented
60:43 in some of these marginalized occupations,
60:45 like a cashier or a fast-food worker.
60:48 And also in large numbers in clerical jobs in offices--
60:53 HR departments, payroll, finance,
60:57 a lot of that is more routine processing information,
61:00 processing paper, transferring data.
61:03 That has huge potential for automation.
61:08 A.I. is going to do some of that, software,
61:11 robots are going to do some of that.
61:12 So how many people are still working
61:14 as switchboard operators?
61:16 Probably none in this country.
61:18 >> NARRATOR: The workplace of the future will demand
61:20 different skills, and gaining them, says Molly Kinder,
61:24 will depend on who can afford them.
61:26 >> I mean it's not a good situation in the United States.
61:28 There's been some excellent research that says
61:30 that half of Americans couldn't afford
61:32 a $400 unexpected expense.
61:35 And if you want to get to a $1,000, there's even less.
61:38 So imagine you're going to go out without a month's pay,
61:41 two months' pay, a year.
61:43 Imagine you want to put savings toward a course
61:47 to, to redevelop your career.
61:49 People can't afford to take time off of work.
61:52 They don't have a cushion, so this lack of economic stability,
61:56 married with the disruptions in people's careers,
61:59 is a really toxic mix.
62:01 >> (blowing whistle)
62:03 >> NARRATOR: The new machines will penetrate every sector
62:05 of the economy: from insurance companies
62:08 to human resource departments;
62:11 from law firms to the trading floors of Wall Street.
62:14 >> Wall Street's going through it,
62:15 but every industry is going through it.
62:16 Every company is looking at all of the disruptive technologies,
62:19 could be robotics or drones or blockchain.
62:23 And whatever it is, every company's using everything
62:27 that's developed, everything that's disruptive,
62:29 in thinking about, "How do I apply that to my business
62:32 to make myself more efficient?"
62:35 And what efficiency means is, mostly,
62:37 "How do I do this with fewer workers?"
62:43 And I do think that when we look at some of the studies
62:47 about opportunity in this country,
62:50 and the inequality of opportunity,
62:53 the likelihood that you won't be able to advance
62:55 from where your parents were, I think that's, that's,
62:59 is very serious and gets to the heart of the way
63:02 we like to think of America as the land of opportunity.
63:06 >> NARRATOR: Inequality has been rising in America.
63:08 It used to be the top 1% of earners-- here in red--
63:13 owned a relatively small portion of the country's wealth.
63:16 Middle and lower earners-- in blue-- had the largest share.
63:20 Then, 15 years ago, the lines crossed.
63:24 And inequality has been increasing ever since.
63:29 >> There's many factors that are driving inequality today,
63:31 and unfortunately, artificial intelligence--
63:33 without being thoughtful about it--
63:38 is a driver for increased inequality
63:41 because it's a form of automation,
63:43 and automation is the substitution of capital
63:47 for labor.
63:49 And when you do that, the people with the capital win.
63:52 So Karl Marx was right,
63:55 it's a struggle between capital and labor,
63:58 and with artificial intelligence,
63:59 we're putting our finger on the scale on the side of capital,
64:02 and how we wish to distribute the benefits,
64:05 the economic benefits,
64:07 that that will create is going to be a major
64:09 moral consideration for society over the next several decades.
64:13 >> This is really an outgrowth of the increasing gaps
64:19 of haves and have-nots-- the wealthy getting wealthier,
64:23 the poor getting poorer.
64:24 It may not be specifically related to A.I.,
64:28 but as... but A.I. will exacerbate that.
64:30 And that, I think, will tear the society apart,
64:36 because the rich will have just too much,
64:38 and those who are have-nots will have perhaps very little way
64:44 of digging themselves out of the hole.
64:46 And with A.I. making its impact, it, it'll be worse, I think.
64:50 ♪ ♪
64:56 (crowd cheering and applauding)
65:01 >> (speaking on P.A.)
65:05 I'm here today for one main reason.
65:08 To say thank you to Ohio.
65:12 (crowd cheering and applauding)
65:17 >> I think the Trump vote was a protest.
65:20 I mean that for whatever reason,
65:22 whatever the hot button was that, you know,
65:25 that really hit home with these Americans who voted for him
65:28 were, it was a protest vote.
65:30 They didn't like the direction things were going.
65:34 (crowd booing and shouting)
65:39 I'm scared.
65:40 I'm gonna be quite honest with you, I worry about the future
65:42 of not just this country, but the, the entire globe.
65:47 If we continue to go in an automated system,
65:51 what are we going to do?
65:52 Now I've got a group of people at the top
65:54 that are making all the money and I don't have anybody
65:57 in the middle that can support a family.
66:00 So do we have to go to the point where we crash to come back?
66:05 And in this case,
66:06 the automation's already gonna be there,
66:08 so I don't know how you come back.
66:09 I'm really worried about where this,
66:11 where this leads us in the future.
66:13 ♪ ♪
66:27 >> NARRATOR: The future is largely being shaped
66:28 by a few hugely successful tech companies.
66:32 They're constantly buying up successful smaller companies
66:35 and recruiting talent.
66:37 Between the U.S. and China,
66:39 they employ a great majority of the leading A.I. researchers
66:42 and scientists.
66:46 In the course of amassing such power,
66:48 they've also become among the richest companies in the world.
66:51 >> A.I. really is the ultimate tool of wealth creation.
66:58 Think about the massive data that, you know, Facebook has
67:03 on user preferences, and how it can very smartly target
67:08 an ad that you might buy something
67:10 and get a much bigger cut that a smaller company couldn't do.
67:16 Same with Google, same with Amazon.
67:18 So it's... A.I. is a set of tools
67:23 that helps you maximize an objective function,
67:26 and that objective function initially will simply be,
67:32 make more money.
67:34 >> NARRATOR: And it is how these companies make that money,
67:36 and how their algorithms reach deeper and deeper into our work,
67:41 our daily lives, and our democracy,
67:42 that makes many people increasingly uncomfortable.
67:47 Pedro Domingos wrote the book "The Master Algorithm."
67:52 >> Everywhere you go, you generate a cloud of data.
67:55 You're trailing data, everything that you do is producing data.
67:58 And then there are computers looking at that data
67:59 that are learning, and these computers are essentially
68:02 trying to serve you better.
68:05 They're trying to personalize things to you.
68:07 They're trying to adapt the world to you.
68:08 So on the one hand, this is great,
68:10 because the world will get adapted to you
68:12 without you even having to explicitly adapt it.
68:15 There's also a danger, because the entities in the companies
68:18 that are in control of those algorithms
68:20 don't necessarily have the same goals as you,
68:22 and this is where I think people need to be aware that,
68:24 what's going on, so they can have more control over it.
68:30 >> You know, we came into this new world thinking
68:31 that we were users of social media.
68:35 It didn't occur to us that social media
68:37 was actually using us.
68:40 We thought that we were searching Google.
68:43 We had no idea that Google was searching us.
68:48 >> NARRATOR: Shoshana Zuboff is a Harvard Business School
68:50 professor emerita.
68:52 In 1988, she wrote a definitive book called
68:55 "In the Age of the Smart Machine."
68:58 For the last seven years, she has worked on a new book,
69:01 making the case that we have now entered a new phase
69:04 of the economy, which she calls "surveillance capitalism."
69:09 >> So, famously, industrial capitalism claimed nature.
69:16 Innocent rivers, and meadows, and forests, and so forth,
69:20 for the market dynamic to be reborn as real estate,
69:25 as land that could be sold and purchased.
69:27 Industrial capitalism claimed work for the market dynamic
69:32 to reborn, to be reborn as labor
69:35 that could be sold and purchased.
69:38 Now, here comes surveillance capitalism,
69:40 following this pattern, but with a dark and startling twist.
69:47 What surveillance capitalism claims is private,
69:51 human experience.
69:53 Private, human experience is claimed as a free source
69:58 of raw material, fabricated into predictions of human behavior.
70:05 And it turns out that there are a lot of businesses
70:09 that really want to know what we will do now, soon, and later.
70:17 >> NARRATOR: Like most people,
70:19 Alastair Mactaggart had no idea
70:21 about this new surveillance business,
70:23 until one evening in 2015.
70:27 >> I had a conversation with a fellow who's an engineer,
70:30 and I was just talking to him one night at a,
70:33 you know, a dinner, at a cocktail party.
70:35 And I... there had been something in the press that day
70:37 about privacy in the paper, and I remember asking him--
70:39 he worked for Google-- "What's the big deal about all,
70:41 why are people so worked up about it?"
70:44 And I thought it was gonna be one of those conversations,
70:45 like, with, you know, if you ever ask an airline pilot,
70:49 "Should I be worried about flying?"
70:50 and they say, "Oh, the most dangerous part
70:52 is coming to the airport, you know, in the car."
70:55 And he said, "Oh, you'd be horrified
70:57 if you knew how much we knew about you."
70:59 And I remember that kind of stuck in my head,
71:02 because it was not what I expected.
71:04 >> NARRATOR: That question would change his life.
71:08 A successful California real estate developer,
71:09 Mactaggart began researching the new business model.
71:15 >> What I've learned since is that their entire business
71:17 is learning as much about you as they can.
71:20 Everything about your thoughts, and your desires,
71:21 and your dreams, and who your friends are,
71:25 and what you're thinking, what your private thoughts are.
71:27 And with that, that's true power.
71:29 And so, I think... I didn't know that at the time.
71:33 That their entire business is basically mining
71:35 the data of your life.
71:37 ♪ ♪
71:39 >> NARRATOR: Shoshana Zuboff had been doing her own research.
71:43 >> You know, I'd been reading and reading and reading.
71:45 From patents, to transcripts of earnings calls,
71:48 research reports.
71:50 And, you know, just literally everything,
71:52 for years and years and years.
71:56 >> NARRATOR: Her studies included the early days
71:57 of Google, started in 1998
72:00 by two young Stanford grad students,
72:02 Sergey Brin and Larry Page.
72:06 In the beginning, they had no clear business model.
72:10 Their unofficial motto was, "Don't Be Evil."
72:13 >> Right from the start, the founders,
72:16 Larry Page and Sergey Brin, they had been very public
72:20 about their antipathy toward advertising.
72:26 Advertising would distort the internet
72:31 and it would distort and disfigure the, the purity
72:37 of any search engine, including their own.
72:41 >> Once in love with e-commerce,
72:43 Wall Street has turned its back on the dotcoms.
72:46 >> NARRATOR: Then came the dotcom crash of the early 2000s.
72:49 >> ...has left hundreds of unprofitable internet companies
72:51 begging for love and money.
72:54 >> NARRATOR: While Google had rapidly become the default
72:56 search engine for tens of millions of users,
72:58 their investors were pressuring them to make more money.
73:04 Without a new business model,
73:06 the founders knew that the young company was in danger.
73:10 >> In this state of emergency, the founders decided,
73:14 "We've simply got to find a way to save this company."
73:19 And so, parallel to this were another set of discoveries,
73:25 where it turns out that whenever we search or whenever we browse,
73:30 we're leaving behind traces-- digital traces--
73:35 of our behavior.
73:37 And those traces, back in these days,
73:39 were called digital exhaust.
73:43 >> NARRATOR: They realized how valuable this data could be
73:45 by applying machine learning algorithms
73:47 to predict users' interests.
73:52 >> What happened was, they decided to turn
73:54 to those data logs in a systematic way,
73:57 and to begin to use these surplus data
74:01 as a way to come up with fine-grained predictions
74:06 of what a user would click on, what kind of ad
74:11 a user would click on.
74:14 And inside Google, they started seeing these revenues
74:18 pile up at a startling rate.
74:22 They realized that they had to keep it secret.
74:26 They didn't want anyone to know how much money they were making,
74:28 or how they were making it.
74:31 Because users had no idea that these extra-behavioral data
74:35 that told so much about them, you know, was just out there,
74:39 and now it was being used to predict their future.
74:43 >> NARRATOR: When Google's I.P.O. took place
74:46 just a few years later,
74:47 the company had a market capitalization
74:49 of around $23 billion.
74:53 Google's stock was now as valuable as General Motors.
74:56 ♪ ♪
74:59 >> And it was only when Google went public in 2004
75:02 that the numbers were released.
75:05 And it's at that point that we learn that between the year 2000
75:10 and the year 2004, Google's revenue line increased
75:14 by 3,590%.
75:20 >> Let's talk a little about information, and search,
75:22 and how people consume it.
75:24 >> NARRATOR: By 2010, the C.E.O. of Google, Eric Schmidt,
75:27 would tell "The Atlantic" magazine...
75:29 >> ...is, we don't need you to type at all.
75:33 Because we know where you are, with your permission,
75:35 we know where you've been, with your permission.
75:39 We can more or less guess what you're thinking about.
75:41 (audience laughing) Now, is that over the line?
75:44 >> NARRATOR: Eric Schmidt and Google declined
75:45 to be interviewed for this program.
75:49 Google's new business model for predicting users' profiles
75:52 had migrated to other companies, particularly Facebook.
75:58 Roger McNamee was an early investor
76:00 and adviser to Facebook.
76:02 He's now a critic, and wrote a book about the company.
76:05 He says he's concerned about how widely companies like Facebook
76:08 and Google have been casting the net for data.
76:11 >> And then they realized, "Wait a minute,
76:13 there's all this data in the economy we don't have."
76:16 So they went to credit card processors,
76:18 and credit rating services,
76:20 and said, "We want to buy your data."
76:23 They go to health and wellness apps and say,
76:25 "Hey, you got women's menstrual cycles?
76:26 We want all that stuff."
76:28 Why are they doing that?
76:30 They're doing that because behavioral prediction
76:34 is about taking uncertainty out of life.
76:38 Advertising and marketing are all about uncertainty--
76:40 you never really know who's going to buy your product.
76:43 Until now.
76:45 We have to recognize that we gave technology a place
76:49 in our lives that it had not earned.
76:55 That essentially, because technology always made things
77:00 better in the '50s, '60s, '70s, '80s, and '90s,
77:03 we developed a sense of inevitability
77:07 that it will always make things better.
77:10 We developed a trust, and the industry earned good will
77:13 that Facebook and Google have cashed in.
77:20 >> NARRATOR: The model is simply this: provide a free service--
77:23 like Facebook-- and in exchange, you collect the data
77:26 of the millions who use it.
77:28 ♪ ♪
77:31 And every sliver of information is valuable.
77:37 >> It's not just what you post, it's that you post.
77:41 It's not just that you make plans to see your friends later.
77:44 It's whether you say, "I'll see you later,"
77:47 or, "I'll see you at 6:45."
77:51 It's not just that you talk about the things
77:54 that you have to do today.
77:56 It's whether you simply rattle them on in a,
77:59 in a rambling paragraph, or list them as bullet points.
78:04 All of these tiny signals are the behavioral surplus
78:09 that turns out to have immense predictive value.
78:13 >> NARRATOR: In 2010, Facebook experimented
78:16 with A.I.'s predictive powers in what they called
78:19 a "social contagion" experiment.
78:21 They wanted to see if, through online messaging,
78:25 they could influence real-world behavior.
78:30 The aim was to get more people to the polls
78:32 in the 2010 midterm elections.
78:34 >> Cleveland, I need you to keep on fighting.
78:38 I need you to keep on believing.
78:41 >> NARRATOR: They offered 61 million users
78:42 an "I voted" button together with faces of friends
78:45 who had voted.
78:47 A subset of users received just the button.
78:52 In the end, they claimed to have nudged 340,000 people to vote.
79:00 They would conduct other "massive contagion" experiments.
79:03 Among them, one showing that by adjusting their feeds,
79:06 they could make users happy or sad.
79:12 >> When they went to write up these findings,
79:13 they boasted about two things.
79:16 One was, "Oh, my goodness.
79:19 Now we know that we can use cues in the online environment
79:24 to change real-world behavior.
79:28 That's big news."
79:31 The second thing that they understood, and they celebrated,
79:35 was that, "We can do this in a way that bypasses
79:39 the users' awareness."
79:43 >> Private corporations have built a corporate surveillance
79:47 state without our awareness or permission.
79:52 And the systems necessary to make it work
79:55 are getting a lot better, specifically with what are known
79:58 as internet of things, smart appliances, you know,
80:01 powered by the Alexa voice recognition system,
80:04 or the Google Home system.
80:06 >> Okay, Google, play the morning playlist.
80:09 >> Okay, playing morning playlist.
80:12 ♪ ♪
80:14 >> Okay, Google, play music in all rooms.
80:16 ♪ ♪
80:18 >> And those will put the surveillance in places
80:21 we've never had it before--
80:22 living rooms, kitchens, bedrooms.
80:24 And I find all of that terrifying.
80:27 >> Okay, Google, I'm listening.
80:29 >> NARRATOR: The companies say they're not using the data
80:31 to target ads, but helping A.I. improve the user experience.
80:36 >> Alexa, turn on the fan.
80:40 (fan clicks on)
80:41 >> Okay.
80:42 >> NARRATOR: Meanwhile, they are researching
80:43 and applying for patents
80:45 to expand their reach into homes and lives.
80:48 >> Alexa, take a video.
80:51 (camera chirps)
80:52 >> The more and more that you use spoken interfaces--
80:54 so smart speakers-- they're being trained
80:57 not just to recognize who you are,
81:00 but they're starting to take baselines
81:03 and comparing changes over time.
81:09 So does your cadence increase or decrease?
81:12 Are you sneezing while you're talking?
81:15 Is your voice a little wobbly?
81:18 The purpose of doing this is to understand
81:21 more about you in real time.
81:24 So that a system could make inferences, perhaps,
81:27 like, do you have a cold?
81:30 Are you in a manic phase?
81:33 Are you feeling depressed?
81:35 So that is an extraordinary amount of information
81:38 that can be gleaned by you simply waking up
81:41 and asking your smart speaker, "What's the weather today?"
81:45 >> Alexa, what's the weather for tonight?
81:47 >> Currently, in Pasadena, it's 58 degrees with cloudy skies.
81:50 >> Inside it is, then.
81:52 Dinner!
81:54 >> The point is that this is the same
81:57 micro-behavioral targeting that is directed
82:01 toward individuals based on intimate, detailed understanding
82:08 of personalities.
82:11 So this is precisely what Cambridge Analytica did,
82:15 simply pivoting from the advertisers
82:19 to the political outcomes.
82:23 >> NARRATOR: The Cambridge Analytica scandal of 2018
82:26 engulfed Facebook, forcing Mark Zuckerberg to appear
82:29 before Congress to explain how the data
82:32 of up to 87 million Facebook users had been harvested
82:36 by a political consulting company based in the U.K.
82:42 The purpose was to target and manipulate voters
82:45 in the 2016 presidential campaign,
82:48 as well as the Brexit referendum.
82:51 Cambridge Analytica had been largely funded
82:53 by conservative hedge fund billionaire Robert Mercer.
82:58 >> And now we know that any billionaire with enough money,
83:02 who can buy the data,
83:04 buy the machine intelligence capabilities,
83:07 buy the skilled data scientists,
83:10 you know, they too can commandeer the public,
83:16 and infect and infiltrate and upend our democracy
83:23 with the same methodologies that surveillance capitalism
83:27 uses every single day.
83:32 >> We didn't take a broad enough view of our responsibility,
83:35 and that was a big mistake.
83:37 And it was my mistake, and I'm sorry.
83:40 >> NARRATOR: Zuckerberg has apologized
83:41 for numerous violations of privacy,
83:44 and his company was recently fined $5 billion
83:47 by the Federal Trade Commission.
83:50 He has said Facebook will now make data protection a priority,
83:53 and the company has suspended tens of thousands
83:56 of third-party apps from its platform
83:59 as a result of an internal investigation.
84:02 >> You know, I wish I could say that after Cambridge Analytica,
84:06 we've learned our lesson and that everything will be much
84:09 better after that, but I'm afraid the opposite is true.
84:12 In some ways, Cambridge Analytica was using tools
84:14 that were ten years old.
84:16 It was really, in some ways, old-school,
84:18 first-wave data science.
84:20 What we're looking at now, with current tools
84:22 and machine learning, is that the ability for manipulation,
84:26 both in terms of elections and opinions,
84:28 but more broadly, just how information travels.
84:31 That is a much bigger problem,
84:34 and certainly much more serious than what we faced
84:36 with Cambridge Analytica.
84:40 >> NARRATOR: A.I. pioneer Yoshua Bengio also has concerns
84:43 about how his algorithms are being used.
84:48 >> So the A.I.s are tools.
84:51 And they will serve the people who control those tools.
84:56 If those people's interests go against the, the values
85:01 of democracy, then democracy is in danger.
85:04 So I believe that scientists who contribute to science,
85:10 when that science can or will have an impact on society,
85:14 those scientists have a responsibility.
85:17 It's a little bit like the physicists of,
85:19 around the Second World War,
85:21 who rose up to tell the governments,
85:25 "Wait, nuclear power can be dangerous
85:29 and nuclear war can be really, really destructive."
85:31 And today, the equivalent of a physicist of the '40s and '50s
85:36 and '60s are, are the computer scientists
85:38 who are doing machine learning and A.I.
85:41 ♪ ♪
85:45 >> NARRATOR: One person who wanted to do something
85:46 about the dangers was not a computer scientist,
85:49 but an ordinary citizen.
85:53 Alastair Mactaggart was alarmed.
85:55 >> Voting is, for me, the most alarming one.
85:58 If less than 100,000 votes separated
86:00 the last two candidates in the last presidential election,
86:03 in three states...
86:06 >> NARRATOR: He began a solitary campaign.
86:10 >> We're talking about convincing a relatively tiny
86:12 fraction of the voters in a very...
86:14 in a handful of states to either come out and vote
86:17 or stay home.
86:18 And remember, these companies know everybody intimately.
86:21 They know who's a racist, who's a misogynist,
86:24 who's a homophobe, who's a conspiracy theorist.
86:26 They know the lazy people and the gullible people.
86:28 They have access to the greatest trove of personal information
86:31 that's ever been assembled.
86:32 They have the world's best data scientists.
86:35 And they have essentially a frictionless way
86:37 of communicating with you.
86:39 This is power.
86:43 >> NARRATOR: Mactaggart started a signature drive
86:44 for a California ballot initiative,
86:47 for a law to give consumers control of their digital data.
86:51 In all, he would spend $4 million of his own money
86:54 in an effort to rein in the goliaths of Silicon Valley.
86:58 Google, Facebook, AT&T, and Comcast
87:02 all opposed his initiative.
87:06 >> I'll tell you, I was scared. Fear.
87:09 Fear of looking like a world-class idiot.
87:12 The market cap of all the firms arrayed against me were,
87:14 was over $6 trillion.
87:19 >> NARRATOR: He needed 500,000 signatures
87:21 to get his initiative on the ballot.
87:25 He got well over 600,000.
87:27 Polls showed 80% approval for a privacy law.
87:33 That made the politicians in Sacramento pay attention.
87:37 So Mactaggart decided that because he was holding
87:40 a strong hand, it was worth negotiating with them.
87:44 >> And if AB-375 passes by tomorrow
87:46 and is signed into law by the governor,
87:48 we will withdraw the initiative.
87:49 Our deadline to do so is tomorrow at 5:00.
87:51 >> NARRATOR: At the very last moment,
87:53 a new law was rushed to the floor of the state house.
87:55 >> Everyone take their seats, please.
87:57 Mr. Secretary, please call the roll.
88:01 >> The voting starts. >> Alan, aye.
88:05 >> And the first guy, I think, was a Republican,
88:07 and he voted for it.
88:08 And everybody had said the Republicans won't vote for it
88:10 because it has this private right of action,
88:11 where consumers can sue.
88:13 And the guy in the Senate, he calls the name.
88:15 >> Aye, Roth.
88:16 Aye, Skinner.
88:17 Aye, Stern.
88:19 Aye, Stone.
88:20 >> You can see down below, and everyone went green,
88:23 and then it passed unanimously.
88:26 >> Ayes 36; No zero, the measure passes.
88:29 Immediate transmittal to the...
88:32 >> So I was blown away.
88:34 It was, it was a day I will never forget.
88:41 So in January, next year, you as a California resident
88:43 will have the right to go to any company and say,
88:45 "What have you collected on me in the last 12 years...
88:47 12 months?
88:48 What of my personal information do you have?"
88:51 So that's the first right.
88:52 It's right of... we call that the right to know.
88:54 The second is the right to say no.
88:56 And that's the right to go to any company and click a button,
88:59 on any page where they're collecting your information,
89:00 and say, "Do not sell my information."
89:03 More importantly, we require that they honor
89:06 what's called a third-party opt-out.
89:09 You will click once in your browser,
89:11 "Don't sell my information,"
89:13 and it will then send the signal to every single website
89:18 that you visit: "Don't sell this person's information."
89:21 And that's gonna have a huge impact on the spread
89:22 of your information across the internet.
89:25 >> NARRATOR: The tech companies had been publicly cautious,
89:27 but privately alarmed about regulation.
89:31 Then one tech giant came on board in support
89:34 of Mactaggart's efforts.
89:37 >> I find the reaction among other tech companies to,
89:39 at this point, be pretty much all over the place.
89:42 Some people are saying, "You're right to raise this.
89:45 These are good ideas."
89:47 Some people say, "We're not sure these are good ideas,
89:49 but you're right to raise it,"
89:50 and some people are saying, "We don't want regulation."
89:54 And so, you know, we have conversations with people
89:56 where we point out that the auto industry is better
90:00 because there are safety standards.
90:03 Pharmaceuticals, even food products,
90:05 all of these industries are better because the public
90:08 has confidence in the products,
90:11 in part because of a mixture of responsible companies
90:14 and responsible regulation.
90:19 >> NARRATOR: But the lobbyists for big tech have been working
90:21 the corridors in Washington.
90:24 They're looking for a more lenient
90:26 national privacy standard, one that could perhaps override
90:29 the California law and others like it.
90:33 But while hearings are held,
90:34 and anti-trust legislation threatened,
90:37 the problem is that A.I. has already spread so far
90:40 into our lives and work.
90:43 >> Well, it's in healthcare, it's in education,
90:46 it's in criminal justice, it's in the experience
90:48 of shopping as you walk down the street.
90:51 It has pervaded so many elements of everyday life,
90:54 and in a way that, in many cases, is completely opaque
90:57 to people.
90:59 While we can see a phone and look at it and we know that
91:00 there's some A.I. technology behind it,
91:02 many of us don't know that when we go for a job interview
91:05 and we sit down and we have a conversation,
91:07 that we're being filmed, and that our micro expressions
91:09 are being analyzed by hiring companies.
91:12 Or that if you're in the criminal justice system,
91:14 that there are risk assessment algorithms
91:16 that are deciding your "risk number,"
91:18 which could determine whether or not you receive bail or not.
91:22 These are systems which, in many cases, are hidden
91:24 in the back end of our sort of social institutions.
91:28 And so, one of the big challenges we have is,
91:29 how do we make that more apparent?
91:31 How do we make it transparent?
91:32 And how do we make it accountable?
91:36 >> For a very long time, we have felt like as humans,
91:39 as Americans, we have full agency
91:43 in determining our own futures-- what we read, what we see,
91:48 we're in charge.
91:50 What Cambridge Analytica taught us,
91:53 and what Facebook continues to teach us,
91:55 is that we don't have agency.
91:58 We're not in charge.
92:00 This is machines that are automating some of our skills,
92:05 but have made decisions about who...
92:09 Who we are.
92:12 And they're using that information to tell others
92:16 the story of us.
92:19 ♪ ♪
92:32 >> NARRATOR: In China, in the age of A.I.,
92:35 there's no doubt about who is in charge.
92:38 In an authoritarian state, social stability
92:41 is the watchword of the government.
92:43 (whistle blowing)
92:47 And artificial intelligence has increased its ability to scan
92:51 the country for signs of unrest.
92:54 (whistle blowing)
92:57 It's been projected that over 600 million cameras
93:00 will be deployed by 2020.
93:04 Here, they may be used to discourage jaywalking.
93:07 But they also serve to remind people
93:10 that the state is watching.
93:14 >> And now, there is a project called Sharp Eyes,
93:18 which is putting camera on every major street
93:22 and the corner of every village in China-- meaning everywhere.
93:29 Matching with the most advanced artificial intelligence
93:33 algorithm, which they can actually use this data,
93:36 real-time data, to pick up a face or pick up a action.
93:39 ♪ ♪
93:42 >> NARRATOR: Frequent security expos feature companies
93:44 like Megvii and its facial- recognition technology.
93:48 They show off cameras with A.I. that can track cars,
93:51 and identify individuals by face,
93:54 or just by the way they walk.
93:58 >> The place is just filled with these screens where you can see
94:02 the computers are actually reading people's faces
94:04 and trying to digest that data, and basically track
94:07 and identify who each person is.
94:09 And it's incredible to see so many,
94:11 because just two or three years ago,
94:12 we hardly saw that kind of thing.
94:14 So, a big part of it is government spending.
94:16 And so the technology's really taken off,
94:18 and a lot of companies have started to sort of glom onto
94:21 this idea that this is the future.
94:25 >> China is on its way to building
94:29 a total surveillance state.
94:32 >> NARRATOR: And this is the test lab
94:33 for the surveillance state.
94:36 Here, in the far northwest of China,
94:40 is the autonomous region of Xinjiang.
94:41 Of the 25 million people who live here,
94:45 almost half are a Muslim Turkic speaking people
94:48 called the Uighurs.
94:52 (people shouting)
94:53 In 2009, tensions with local Han Chinese led to protests
94:57 and then riots in the capital, Urumqi.
95:01 (people shouting, guns firing)
95:04 (people shouting)
95:08 As the conflict has grown, the authorities have brought in
95:11 more police, and deployed extensive
95:13 surveillance technology.
95:17 That data feeds an A.I. system that the government claims
95:20 can predict individuals prone to "terrorism"
95:24 and detect those in need of "re-education"
95:27 in scores of recently built camps.
95:30 It is a campaign that has alarmed human rights groups.
95:35 >> Chinese authorities are, without any legal basis,
95:39 arbitrarily detaining up to a million Turkic Muslims
95:42 simply on the basis of their identity.
95:44 But even outside the facilities in which these people
95:49 are being held, most of the population there
95:51 is being subjected to extraordinary levels
95:53 of high-tech surveillance such that almost no aspect of life
95:58 anymore, you know, takes place outside
96:01 the state's line of sight.
96:02 And so the kinds of behavior that's now being monitored--
96:06 you know, which language do you speak at home,
96:07 whether you're talking to your relatives
96:09 in other countries, how often you pray--
96:13 that information is now being hoovered up
96:16 and used to decide whether people should be subjected
96:19 to political re-education in these camps.
96:21 >> NARRATOR: There have been reports of torture
96:24 and deaths in the camps.
96:27 And for Uighurs on the outside,
96:28 Xinjiang has already been described
96:31 as an "open-air prison."
96:34 >> Trying to have a normal life as a Uighur
96:36 is impossible both inside and outside of China.
96:40 Just imagine, while you're on your way to work,
96:43 police subject you to scan your I.D.,
96:47 forcing you to lift your chin, while machines take your photo
96:51 and wait... you wait until you find out if you can go.
96:54 Imagine police take your phone and run data scan,
96:59 and force you to install compulsory software
97:02 allowing your phone calls and messages to be monitored.
97:07 >> NARRATOR: Nury Turkel, a lawyer and a prominent
97:09 Uighur activist, addresses a demonstration in Washington, DC.
97:14 Many among the Uighur diaspora have lost all contact
97:18 with their families back home.
97:21 Turkel warns that this dystopian deployment of new technology
97:26 is a demonstration project for authoritarian regimes
97:29 around the world.
97:31 >> They have a bar codes in somebody's home doors
97:35 to identify what kind of citizen that he is.
97:39 What we're talking about is a collective punishment
97:42 of an ethnic group.
97:45 Not only that, the Chinese government has been promoting
97:48 its methods, its technology, it is...
97:53 to other countries, namely Pakistan, Venezuela, Sudan,
97:58 and others to utilize, to squelch political resentment
98:04 or prevent a political upheaval in their various societies.
98:07 ♪ ♪
98:10 >> NARRATOR: China has a grand scheme to spread its technology
98:13 and influence around the world.
98:15 Launched in 2013, it started along the old Silk Road
98:19 out of Xinjiang, and now goes far beyond.
98:23 It's called "the Belt and Road Initiative."
98:29 >> So effectively what the Belt and Road
98:31 is is China's attempt to, via spending and investment,
98:35 project its influence all over the world.
98:37 And we've seen, you know, massive infrastructure projects
98:39 going in in places like Pakistan, in, in Venezuela,
98:43 in Ecuador, in Bolivia--
98:45 you know, all over the world, Argentina,
98:47 in America's backyard, in Africa.
98:49 Africa's been a huge place.
98:51 And what the Belt and Road ultimately does is, it attempts
98:54 to kind of create a political leverage
98:56 for the Chinese spending campaign all over the globe.
99:00 >> NARRATOR: Like Xi Jinping's 2018 visit to Senegal,
99:03 where Chinese contractors had just built a new stadium,
99:06 arranged loans for a new infrastructure development,
99:10 and, said the Foreign Ministry,
99:13 there would be help "maintaining social stability."
99:16 >> As China comes into these countries and provides
99:19 these loans, what you end up with is Chinese technology
99:21 being sold and built out by, you know, by Chinese companies
99:24 in these countries.
99:26 We've started to see it already in terms
99:27 of surveillance systems.
99:29 Not the kind of high-level A.I. stuff yet, but, you know,
99:31 lower-level, camera-based, you know,
99:32 manual sort of observation-type things all over.
99:36 You know, you see it in Cambodia, you see it in Ecuador,
99:38 you see it in Venezuela.
99:39 And what they do is, they sell a dam, sell some other stuff,
99:42 and they say, "You know, by the way, we can give you
99:44 these camera systems and, for your emergency response.
99:46 And it'll cost you $300 million,
99:49 and we'll build a ton of cameras,
99:50 and we'll build you a kind of, you know, a main center
99:52 where you have police who can watch these cameras."
99:55 And that's going in all over the world already.
99:57 ♪ ♪
100:03 >> There are 58 countries that are starting to plug in
100:06 to China's vision of artificial intelligence.
100:10 Which means effectively that China is in the process
100:15 of raising a bamboo curtain.
100:17 One that does not need to...
100:20 One that is sort of all-encompassing,
100:24 that has shared resources,
100:26 shared telecommunications systems,
100:28 shared infrastructure, shared digital systems--
100:31 even shared mobile-phone technologies--
100:35 that is, that is quickly going up all around the world
100:38 to the exclusion of us in the West.
100:41 >> Well, one of the things I worry about the most
100:43 is that the world is gonna split in two,
100:45 and that there will be a Chinese tech sector
100:47 and there will be an American tech sector.
100:48 And countries will effectively get to choose
100:51 which one they want.
100:53 It'll be kind of like the Cold War, where you decide,
100:55 "Oh, are we gonna align with the Soviet Union
100:57 or are we gonna align with the United States?"
100:59 And the Third World gets to choose this or that.
101:02 And that's not a world that's good for anybody.
101:06 >> The markets in Asia and the U.S. falling sharply
101:09 on news that a top Chinese executive
101:11 has been arrested in Canada.
101:13 Her name is Sabrina Meng.
101:14 She is the CFO of the Chinese telecom Huawei.
101:19 >> NARRATOR: News of the dramatic arrest of an important
101:21 Huawei executive was ostensibly about the company
101:24 doing business with Iran.
101:26 But it seemed to be more about American distrust
101:29 of the company's technology.
101:32 From its headquarters in southern China--
101:33 designed to look like fanciful European capitals--
101:38 Huawei is the second-biggest seller of smartphones,
101:41 and the world leader in building 5G networks,
101:45 the high-speed backbone for the age of A.I.
101:50 Huawei's C.E.O., a former officer
101:53 in the People's Liberation Army,
101:54 was defiant about the American actions.
101:57 >> (speaking Mandarin)
101:59 (translated): There's no way the U.S. can crush us.
102:02 The world needs Huawei because we are more advanced.
102:08 If the lights go out in the West, the East will still shine.
102:12 And if the North goes dark, then there is still the South.
102:16 America doesn't represent the world.
102:19 >> NARRATOR: The U.S. government fears that as Huawei supplies
102:22 countries around the world with 5G,
102:26 the Chinese government could have back-door access
102:28 to their equipment.
102:30 Recently, the C.E.O. promised complete transparency
102:34 into the company's software,
102:36 but U.S. authorities are not convinced.
102:39 >> Nothing in China exists free and clear of the party-state.
102:44 Those companies can only exist and prosper
102:48 at the sufferance of the party.
102:51 And it's made very explicit that when the party needs them,
102:55 they either have to respond or they will be dethroned.
102:58 So this is the challenge with a company like Huawei.
103:03 So Huawei, Ren Zhengfei, the head of Huawei, he can say,
103:08 "Well, we... we're just a private company and we just...
103:12 We don't take orders from the Communist Party."
103:15 Well, maybe they haven't yet.
103:18 But what the Pentagon sees,
103:20 the National Intelligence Council sees,
103:23 and what the FBI sees is, "Well, maybe not yet."
103:27 But when the call comes,
103:30 everybody knows what the company's response will be.
103:35 >> NARRATOR: The U.S. Commerce Department
103:37 has recently blacklisted eight companies
103:39 for doing business with government agencies in Xinjiang,
103:42 claiming they are aiding in the "repression"
103:45 of the Muslim minority.
103:49 Among the companies is Megvii.
103:52 They have strongly objected to the blacklist,
103:55 saying that it's "a misunderstanding of our company
103:57 and our technology."
104:01 ♪ ♪
104:04 President Xi has increased his authoritarian grip
104:07 on the country.
104:11 In 2018, he had the Chinese constitution changed
104:14 so that he could be president for life.
104:20 >> If you had asked me 20 years ago,
104:21 "What will happen to China?", I would've said,
104:23 "Well, over time, the Great Firewall will break down.
104:27 Of course, people will get access to social media,
104:29 they'll get access to Google...
104:31 Eventually, it'll become a much more democratic place,
104:35 with free expression and lots of Western values."
104:38 And the last time I checked, that has not happened.
104:41 In fact, technology's become a tool of control.
104:46 And as China has gone through this amazing period of growth
104:48 and wealth and openness in certain ways,
104:51 there has not been the democratic transformation
104:53 that I thought.
104:55 And it may turn out that, in fact,
104:57 technology is a better tool for authoritarian governments
105:00 than it is for democratic governments.
105:02 >> NARRATOR: To dominate the world in A.I.,
105:04 President Xi is depending on Chinese tech
105:08 to lead the way.
105:11 While companies like Baidu, Alibaba,
105:13 and Tencent are growing more powerful and competitive,
105:17 they're also beginning to have difficulty accessing
105:20 American technology, and are racing to develop their own.
105:27 With a continuing trade war and growing distrust,
105:31 the longtime argument for engagement
105:33 between the two countries has been losing ground.
105:38 >> I've seen more and more of my colleagues move
105:42 from a position where they thought,
105:44 "Well, if we just keep engaging China,
105:47 the lines between the two countries
105:50 will slowly converge."
105:52 You know, whether it's in economics, technology, politics.
105:56 And the transformation is
105:58 that they now think they're diverging.
106:01 So, in other words, the whole idea of engagement
106:05 is coming under question.
106:07 And that's cast an entirely different light on technology,
106:15 because if you're diverging and you're heading into a world
106:18 of antagonism-- you know, conflict, possibly,
106:23 then suddenly, technology is something
106:25 that you don't want to share.
106:27 You want to sequester,
106:30 to protect your own national interest.
106:34 And I think the tipping-point moment we are at now,
106:38 which is what is casting the whole question of things
106:41 like artificial intelligence and technological innovation
106:45 into a completely different framework,
106:47 is that if in fact China and the U.S. are in some way
106:51 fundamentally antagonistic to each other,
106:54 then we're in a completely different world.
106:59 >> NARRATOR: In the age of A.I., a new reality is emerging.
107:05 That with so much accumulated investment
107:07 and intellectual power, the world is already dominated
107:11 by just two A.I. superpowers.
107:16 That's the premise of a new book written by Kai-Fu Lee.
107:22 >> Hi, I'm Kai-Fu.
107:23 >> Hi, Dr. Lee, so nice to meet you.
107:25 >> Really nice to meet you.
107:26 Look at all these dog ears.
107:28 I love, I love that. >> You like that?
107:30 >> But I... but I don't like you didn't buy the book,
107:31 you... you borrowed it.
107:33 >> I couldn't find it! >> Oh, really?
107:35 >> Yeah! >> And, and you...
107:36 you're coming to my talk? >> Of course!
107:39 >> Oh, hi. >> I did my homework,
107:40 I'm telling you.
107:41 >> Oh, my goodness, thank you.
107:42 Laurie, can you get this gentleman a book?
107:44 (people talking in background)
107:46 >> NARRATOR: In his book and in life,
107:47 the computer scientist-cum-venture capitalist
107:50 walks a careful path.
107:52 Criticism of the Chinese government is avoided,
107:56 while capitalist success is celebrated.
107:58 >> I'm studying electrical engineering.
108:00 >> Sure, send me a resume. >> Okay, thanks.
108:03 >> NARRATOR: Now, with the rise of the two superpowers,
108:06 he wants to warn the world of what's coming.
108:09 >> Are you the new leaders?
108:11 >> If we're not the new leaders, we're pretty close.
108:14 (laughs)
108:15 Thank you very much. >> Thanks.
108:18 >> NARRATOR: "Never," he writes, "has the potential
108:20 for human flourishing been higher
108:22 or the stakes of failure greater."
108:26 ♪ ♪
108:27 >> So if one has to say who's ahead, I would say today,
108:32 China is quickly catching up.
108:34 China actually began its big push
108:38 in A.I. only two-and-a-half years ago,
108:42 when the AlphaGo-Lee Sedol match became the Sputnik moment.
108:46 >> NARRATOR: He says he believes that the two A.I. superpowers
108:49 should lead the way and work together
108:52 to make A.I. a force for good.
108:55 If we do, we may have a chance of getting it right.
108:58 >> If we do a very good job in the next 20 years,
109:00 A.I. will be viewed as an age of enlightenment.
109:04 Our children and their children will see A.I. as serendipity.
109:08 That A.I. is here to liberate us from having to do routine jobs,
109:13 and push us to do what we love,
109:15 and push us to think what it means to be human.
109:19 >> NARRATOR: But what if humans mishandle this new power?
109:23 Kai-Fu Lee understands the stakes.
109:25 After all, he invested early in Megvii,
109:28 which is now on the U.S. blacklist.
109:33 He says he's reduced his stake and doesn't speak
109:35 for the company.
109:38 Asked about the government using A.I.
109:40 for social control, he chose his words carefully.
109:44 >> Um... A.I. is a technology that can be used
109:49 for good and for evil.
109:52 So how... how do governments limit themselves in,
110:00 on the one hand, using this A.I. technology
110:04 and the database to maintain a safe environment
110:07 for its citizens, but, but not encroach
110:11 on an individual's rights and privacies?
110:14 That, I think, is also a tricky issue, I think,
110:17 for, for every country.
110:19 I think for... I think every country will be tempted
110:22 to use A.I. probably beyond the limits
110:26 to which you and I would like the government to use.
110:35 ♪ ♪
110:40 >> NARRATOR: Emperor Yao devised the game of Go
110:43 to teach his son discipline, concentration, and balance.
110:48 Over 4,000 years later, in the age of A.I.,
110:52 those words still resonate with one of its architects.
110:56 ♪ ♪
110:58 >> So A.I. can be used in many ways that are very beneficial
111:02 for society.
111:03 But the current use of A.I. isn't necessarily aligned
111:08 with the goals of building a better society,
111:11 unfortunately.
111:12 But, but we could change that.
111:16 >> NARRATOR: In 2016, a game of Go gave us a glimpse
111:19 of the future of artificial intelligence.
111:24 Since then, it has become clear that we will need
111:27 a careful strategy to harness this new and awesome power.
111:35 >> I, I do think that democracy is threatened by the progress
111:38 of these tools unless we improve our social norms
111:42 and we increase the collective wisdom
111:46 at the planet level to, to deal with that increased power.
111:51 I'm hoping that my concerns are not founded,
111:57 but the stakes are so high
111:59 that I don't think we should take these concerns lightly.
112:06 I don't think we can play with those possibilities and just...
112:11 race ahead without thinking about the potential outcomes.
112:17 ♪ ♪
112:27 >> Go to pbs.org/frontline for more of the impact
112:31 of A.I. on jobs.
112:32 >> I believe about 50% of jobs will be somewhat
112:37 or extremely threatened by A.I. in the next 15 years or so.
112:41 >> And a look at the potential for racial bias
112:43 in this technology.
112:45 >> We've had issues with bias, with discrimination,
112:47 with poor system design, with errors.
112:48 >> Connect to the "Frontline" community on Facebook
112:51 and Twitter, and watch anytime on the PBS Video app
112:54 or pbs.org/frontline.
112:58 ♪ ♪
113:25 >> For more on this and other "Frontline" programs,
113:26 visit our website at pbs.org/frontline.
113:34 ♪ ♪
113:40 To order "Frontline's" "In the Age of A.I." on DVD,
113:43 visit ShopPBS or call 1-800-PLAY-PBS.
113:48 This program is also available on Amazon Prime Video.
113:57 ♪ ♪