STEM-Talk

Episode 72: Peter Norvig talks about working at Google, digital privacy, fake news, killer robots and AI’s future

Sep 11, 2018

Today’s episode features a timely interview with Google’s Director of Research, Peter Norvig.  He is also the co-author of “Artificial Intelligence: A Modern Approach,” which is in its third edition and is a leading AI textbook.

In today’s interview, we talk to Peter about fake news, trolls, self-driving cars, killer robots, the future of artificial intelligence, and a lot more.

We also talk to Peter about digital privacy. Tech companies such as Google, Facebook, Twitter and others have been facing heavy criticism recently over the way they handle people’s digital data.

In May, Europe began enforcing a new data-protection law, the General Data Protection Regulation, which restricts how people’s online data can be collected and used. In June, California passed a privacy law that requires tech and information companies to disclose what data they collect about people and with whom they share it. Congress is currently considering a federal privacy law that would also govern how personal digital data is handled.

Ken and Peter have a history that goes back to their days at the NASA Ames Research Center in Silicon Valley, where Ken was the center’s associate director and recruited Peter to become chief of its Computational Sciences Division.

In today’s episode, we discuss:

  • How artificial intelligence has changed since the days when Peter first became a practicing AI professional. [00:19:20]
  • How AI research is now increasingly driven by commercial interests rather than government grants. [00:23:39]
  • What deep learning is and what the word “deep” means in this context. [00:27:48]
  • The philosophical questions that surround AI, such as: “What does it mean to be intelligent?” and “Can a machine be conscious?” [00:36:58]
  • Search function and privacy. [00:44:32]
  • Google’s responsibility for the content posted on their platforms. [00:50:06]
  • The problems that arise when tech companies police content. [00:51:17]
  • Peter’s thoughts about a meeting Elon Musk had with U.S. governors where he urged them to adopt AI legislation before “robots start going down the street killing people.” [00:56:18]
  • The meaning of “singularity” and whether Peter believes in it. [01:03:19]
  • Peter’s advice for listeners who are interested in going to work for Google someday. [01:12:10]

Show notes:

[00:02:15] Dawn begins the interview by asking Peter about an interview he did with Forbes magazine in which he said, “I don’t care so much whether what we are building is real intelligence. We know how to build real intelligence. My wife and I did it twice, although she did a lot more of the work. We don’t need to duplicate humans, that’s why I focus on creating tools to help us rather than duplicating what we already know how to do. We want humans and machines to partner and do what humans and machines couldn’t do on their own.” Dawn asks Peter to expand on this belief and explain how it has influenced his career.

[00:03:23] Dawn asks Peter about growing up in Boston and his habit of writing to the local newspaper to complain about innumeracy and sloppy language in its science stories.

[00:04:36] Ken mentions that Peter’s father was a math professor and his mother an English literature professor. Even though his high-school teachers suggested a career in journalism, Peter decided to learn programming instead. Peter also talks about taking a class in linguistics, which got him thinking about using computers to process natural language.

[00:05:54] Dawn asks Peter about classes he took at Brown University that led him to start thinking about artificial intelligence.

[00:07:00] Dawn mentions that Peter went to the University of California, Berkeley, for his Ph.D. and asks what motivated him to enroll in the computer science department and research AI.

[00:08:03] Dawn asks Peter about the research he did after receiving his Ph.D., when he joined the University of Southern California as an assistant professor and research faculty member.

[00:08:36] Peter talks about the work he did in various labs during the early years of his career.

[00:09:45] Peter talks about how Ken, while on leave from IHMC, recruited him in 1998 to become chief of the Computational Sciences Division at NASA Ames.

[00:11:32] Ken and Peter recall a tag-team address they made at the 1999 conference of the Association for the Advancement of Artificial Intelligence. The talk was titled, “AI and Space Exploration: Where No Machine Has Gone Before”.

[00:14:07] Dawn mentions that in 1996 a couple of Stanford students developed a search algorithm originally known as “BackRub,” which eventually led to the formation of Google in 1998. Peter joined Google in 2001, and Dawn asks how that came about.

[00:16:11] Dawn asks Peter to talk about the differences in the work cultures of Google and NASA.

[00:17:41] Ken mentions that the textbook Peter co-wrote with Stuart Russell, originally published in 1995, is now in its third edition and is considered one of the leading textbooks on artificial intelligence. Ken asks whether Peter is considering a new edition, given how rapidly the field is evolving.

[00:19:20] Dawn comments on how AI programming itself has changed over time with the introduction of new languages, tools, and communities. She asks how things differ for the practicing AI professional today compared to when Peter was getting started.

[00:21:28] Ken asks whether Peter has any thoughts on the gap between reality and the inflated expectations created by the current hype around AI, which has stoked fears as well as utopian dreams.

[00:23:39] Because AI is now increasingly being driven by commercial interests rather than government research grants, Dawn asks how the field will change.

[00:25:10] Dawn asks why the look and feel of Google web searches hasn’t changed that much over the past 10 years.

[00:26:09] Dawn mentions that Google’s search engine has flourished for 20 years because of its speed, relevance, coverage, and other such measures of performance. Given that Google is still the gold standard in search, she asks how Google tests for performance.

[00:27:48] Ken mentions that about a decade ago Google was slightly disparaging about the utility of AI, but then, at least as it looked from the outside, Google rather suddenly changed its tune. Ken comments that this seems to be the result of the explosion of deep-learning applications which, when applied to very large datasets, yield numerous state-of-the-art results in domains such as speech recognition, image recognition, and language translation. Ken asks Peter to explain what deep learning is, what it does well, and what the word “deep” means in this context.
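As background for that discussion, here is a minimal, hypothetical sketch of what “deep” usually refers to: a model that passes its input through a stack of several learned layers rather than a single one. The layer sizes, activation function, and random initialization below are illustrative assumptions for the sketch, not anything described in the episode.

    import numpy as np

    def relu(x):
        # A common nonlinearity applied between layers.
        return np.maximum(0.0, x)

    def init_layer(n_in, n_out, rng):
        # Small random weights and zero biases for one fully connected layer.
        return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

    def forward(x, layers):
        # "Deep" = the input flows through many stacked layers, each producing
        # a progressively more abstract representation of the data.
        for w, b in layers[:-1]:
            x = relu(x @ w + b)
        w, b = layers[-1]
        return x @ w + b  # final layer left linear (e.g., class scores)

    rng = np.random.default_rng(0)
    sizes = [784, 256, 128, 64, 10]   # illustrative: image pixels -> class scores
    layers = [init_layer(a, b, rng) for a, b in zip(sizes[:-1], sizes[1:])]
    scores = forward(rng.normal(size=(1, 784)), layers)
    print(scores.shape)   # (1, 10)

In practice the weights are learned from large datasets by gradient descent; the sketch is only meant to show that “deep” describes the number of stacked layers.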

[00:36:58] Ken comments on how, back in the day, good old-fashioned AI raised many big philosophical questions. Questions ranged from “What does it mean to be intelligent?” to “Can a machine be conscious?” Many of these questions were explored in famous films such as “Blade Runner,” and “Ex Machina.” Ken and Peter talk about whether there are new big questions being raised in the context of newer forms of AI.

[00:39:41] Ken brings up the Simon Newcomb Award, which he and Pat Hayes created in the ’90s to recognize the most wrong-headed arguments against the possibility of intelligent machines.

[00:41:25] Ken observes that the papers at the National AI Conference, while often technically excellent, are typically statistical in nature and narrowly cast. Ken goes on to propose that this could be a sign of the field maturing, or perhaps ducking the hard and interesting questions, or a bit of both. He asks Peter for his thoughts on this.

[00:44:32] Ken brings the conversation back to Google and asks about search function and privacy. He mentions that Google provides some of the web’s most used and appreciated software, including Gmail, Docs, Drive, Calendar, and more. But given Google’s access to vast amounts of data that has led to personalized searches and advertisements, Ken asks Peter for his thoughts about the concerns regarding loss of privacy and the use of personal information.

[00:47:45] Dawn asks Peter how Google recommends news articles based on people’s recent searches, and notes that some people argue that recommending articles based on a person’s history is not such a good thing.

[00:50:06] Dawn asks Peter whether Google feels it has a responsibility to weed out fake news and international trolling. She also asks whether the controversies over fake news and trolls are beginning to muddle the definition of information itself.

[00:51:17] Ken mentions that problems are arising for platform providers not so much for legal reasons as for ideological ones. Providers are now deciding to police the content that comes onto their sites, banning one group of ideological extremists while leaving another, equally unhinged group untouched. Ken asks Peter how society will construe the role of Google, Facebook, and others as they work to police the content that appears on their sites.

[00:55:31] Dawn asks Peter for his take on companies such as Powerset as well as others who see natural language search, which allows people to use sentences rather than keywords, as the future of search.

[00:56:18] Dawn mentions how last year she turned the tables on Ken and interviewed him for STEM-Talk episodes 49 and 50. In one of those interviews, she asked Ken about a New York Times story on the meeting Elon Musk had with U.S. governors in which Musk said they should adopt AI legislation before “robots start going down the street killing people.” She asks Peter if he subscribes to this killer-AI theory.

[00:59:03] Ken comments on how interesting he finds the sudden shift in the argument: pundits who once said AI was provably impossible now say that superhuman AI represents the greatest danger the human race has ever faced.

[00:59:49] Dawn mentions that the biggest question people asked Peter during a trip to Australia last year concerned the impact of self-driving cars on professions such as truck driving. She asks Peter to talk about the fear that algorithms and machine learning are replacing jobs.

[01:03:19] Ken mentions that earlier in this decade Peter spoke at the Singularity Summit where he remarked that he was not a believer in what some refer to as “the singularity.” He asks Peter to explain what is generally meant by that term, and also to talk about his views on it.

[01:07:44] Dawn comments on the coming “post-app era,” in which a new kind of human, computer, and smartphone technology will replace the need for apps. She asks if this is something Google is working on.

[01:10:25] Dawn mentions that Peter is widely quoted as saying that he has the best job in the world, and asks what it is that he does.

[01:12:10] Dawn asks Peter to share some advice for listeners who are still in school and are thinking about going to work for Google someday.

[01:12:55] Ken mentions that Peter and his wife enjoy cycling and ends the interview by asking Peter if he has picked up any other new hobbies over the years.

Peter’s Google AI page

Peter’s Wikipedia page

Peter’s web page

Peter’s CV

Peter’s Google Scholar page

Peter’s Amazon page

Peter’s TED Talk

Lecture by Peter: “The Science and Engineering of Online Learning”

Lecture by Ken: “On Computational Wings: The Prospects & Putative Perils of AI”

Learn more about IHMC

STEM-Talk homepage