John McCarthy, 1927–2011

A titan of computing for 50 years whose achievements will shape the next 50


Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.” This is show number 72.

This past Sunday saw the death of John McCarthy, one of the pioneers of computer science and a founder of the field of artificial intelligence, a phrase he is credited with inventing. It capped a stunningly sad month that also saw the passing of Apple cofounder Steve Jobs, and Dennis Ritchie, the coinventor of Unix and the C programming language.

John McCarthy’s life bounced back and forth between the twin epicenters of computing. He was born in Boston in 1927 but grew up near Caltech, from which he got his B.S. in mathematics. He detoured to Princeton for his Ph.D. but ended up at MIT, where he cofounded its AI lab, the world’s first, before going to Stanford in 1962 to found its AI lab. In between, he found time to invent Lisp, one of the most influential programming languages ever created. McCarthy received computer science’s highest honor, the Turing Award, in 1971, the Kyoto Prize in 1988, and the National Medal of Science in 1991.

He was probably the first person to seriously consider, in 1955, the question of self-awareness in machines both as a computer science challenge and a social issue. He also seems to have been the first to consider computation as a utility, in the way that electricity is, an idea that first became time-sharing in the 1960s and lately shows itself in cloud computing. When others were focusing their machine-intelligence efforts on chess, McCarthy was working on natural language processing and what we would now call robotics. John McCarthy was also an academic, writing dozens of papers in computer science, and more than a few in mathematics, and overseeing computer science dissertations, including that of my guest today.

Ramanathan Guha holds more than 20 patents, was a principal scientist at Apple and a principal engineer at Netscape, where he created the first version of RSS, without which many of our listeners couldn’t subscribe to this show as a podcast. He cofounded Epinions, has been a researcher at IBM Almaden Research Center, and currently works at Google. Though we normally call our guests by their first name, I’m going to follow the custom of his friends and colleagues. Guha, welcome to the podcast.

Ramanathan Guha: Thank you.

Steven Cherry: John McCarthy’s probably best known for creating the Lisp programming language. That was back in the late ’50s, when programming languages were fairly new. It was originally designed as a sort of add-on to Fortran, but I think it’s fair to say Lisp was wildly different from it and all the other languages of the day. Tell us about Lisp.

Ramanathan Guha: The set of new things that Lisp introduced into the world of programming is so large that it’s almost impossible to think of programming languages without the contributions of Lisp. Everything from conditionals, to recursion, to the idea of mutable data structures, to the idea of a type system: the set of things he introduced into programming languages and into computing is so large that it almost seems impossible it all came back then. As Paul Graham once mentioned, everything that we have been slowly introducing into programming languages since then is as if somebody set up this trust fund of ideas, and we’re slowly able to take things out of it. We’ve still not taken everything out of there. Even things like garbage collection came into mainstream, sort of, commercial computing only in the mid-’90s. He had it back in the ’60s. And he came up with the specification, and the beauty of the specification was that an entire programming language could be specified in half a page; it was just stunning in its beauty. I remember Alan Kay telling me, if you ever create anything in your life, it needs to have the beauty of Lisp. The idea of beauty in computer science, as a much more subtle thing, was something that John McCarthy brought into the picture.
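(For readers who haven’t seen it, the “half a page” Guha refers to is McCarthy’s metacircular definition of Lisp, in which eval is written in Lisp itself from a handful of primitives. The sketch below is not that definition and is not from the interview; it is only a loose illustration of the idea, written in Python, with an invented evaluate function and a toy environment.)

```python
# A deliberately tiny Lisp-style evaluator, written in Python as an
# illustrative sketch of what "an entire language specified in half a
# page" looks like. It is NOT McCarthy's original 1960 definition, which
# was written in Lisp itself in terms of atom, eq, car, cdr, and cons;
# it only hints at the idea.

def evaluate(expr, env):
    """Evaluate a tiny Lisp-like expression represented as nested Python lists."""
    if isinstance(expr, str):              # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):         # a number or other literal
        return expr
    head = expr[0]
    if head == "quote":                    # (quote x) -> x, unevaluated
        return expr[1]
    if head == "if":                       # (if test then else)
        _, test, then, alt = expr
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "lambda":                   # (lambda (params...) body)
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    # otherwise: apply an operator to its evaluated arguments
    fn = evaluate(head, env)
    args = [evaluate(arg, env) for arg in expr[1:]]
    return fn(*args)

if __name__ == "__main__":
    import operator
    env = {"+": operator.add, "*": operator.mul, "<": operator.lt}
    # ((lambda (x) (* x x)) (+ 2 3))  =>  25
    program = [["lambda", ["x"], ["*", "x", "x"]], ["+", 2, 3]]
    print(evaluate(program, env))          # prints 25
```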

Steven Cherry: No other language at the time had implemented recursion, yet for Lisp it’s not only a feature, it’s the basis for the language. Recursion is a way for a function to call itself. Is it what makes Lisp so suitable for AI projects?

Ramanathan Guha: There are so many different aspects. I mean, the idea of self-modifying programs, and it’s not that Lisp is suitable only for AI projects. Lisp is suitable as a model of computing, and it can be used for everything. There is essentially no programming language designed after Lisp that does not have recursion. The only language used to any extent that doesn’t have recursion is Fortran, which came before it. It was the ability to represent a very, very wide variety of data structures, which was required for representing everything from chess moves, to linguistic references, to robotics plans, and so on and so forth, that made Lisp so flexible and therefore usable by so many people for so many different purposes. And one of McCarthy’s points used to be that the difference between AI and, say, computer graphics or operating systems is that AI is all about generality. In fact, his Turing Award lecture was, as he called it, about “generality in AI,” and he tried to distinguish the pursuit of AI from the pursuit of any one specific thing: AI is about the general. And it took him quite a few years to precisely articulate that. He won the Turing Award, as was mentioned, in 1971, but he told the ACM, “Look, I’m not here today to give my Turing Award lecture.” It was only in ’86, I think, that he was finally able to articulate what it was about AI that made it special, and his Turing Award lecture was called “Generality in AI.” And that is what made Lisp so appropriate for AI: Here was a very basic set of data structures, the list and so on, which made it so flexible for so many different things.
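(To make the connection concrete, here is a minimal sketch, invented for illustration rather than taken from the interview, of how recursion and the list data structure fit together; the count_atoms helper and the nested-list example are hypothetical, and the sketch is in Python rather than Lisp.)

```python
# A minimal sketch, not from the interview, of the two ideas Guha ties
# together: the list as a universal data structure and recursion as the
# way to traverse it. count_atoms walks an arbitrarily nested list the
# way a Lisp function would walk an S-expression, by calling itself on
# the head (car) and the rest (cdr) of the list.

def count_atoms(sexpr):
    """Count the non-list elements in an arbitrarily nested list."""
    if not isinstance(sexpr, list):   # an atom: a number, symbol, etc.
        return 1
    if not sexpr:                     # the empty list, Lisp's nil
        return 0
    head, rest = sexpr[0], sexpr[1:]  # roughly car and cdr
    return count_atoms(head) + count_atoms(rest)

# The same nested-list shape can stand in for a chess move, a parse tree,
# or a robot plan; only the interpretation changes, not the traversal.
move = ["move", ["piece", "knight"], ["from", "g1"], ["to", "f3"]]
print(count_atoms(move))              # prints 7
```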

Steven Cherry: Now you yourself started out doing fairly hard-core AI research and development and then the Web came along, and your work shifted towards something called the “semantic Web.” Maybe you could just tell us a little bit about the semantic Web and what’s the connection to AI?

Ramanathan Guha: Oh, sure. My work in AI was in knowledge representation, which is sort of the core of what John did from about the late ’60s on, and I worked on the Cyc project, which he used to consult for, from about ’87 to ’94 or ’95. And the core of the Cyc project was to nail what people still consider one of the hardest problems, which is common sense reasoning. We have cars that drive themselves, and Siri-like things that can set up calendar appointments and so on and so forth, but we still don’t have a computer system that can do what a 6-year-old or a 7-year-old can do in terms of very, very simple common sense reasoning problems. And the goal was exactly that: What does it take to get a computer to do that? The Cyc project was about that.

Steven Cherry: By an odd coincidence, this very October is the month in which natural-language processing is starting to reach the masses, isn’t it? With Apple’s new operating system for mobile phones and the new iPhone, there’s this Siri feature.

Ramanathan Guha: Yes, it’s actually amazing, yes.

Steven Cherry: Tell us about Siri and how that sort of got its start with John McCarthy.

Ramanathan Guha: Well, if you trace back the roots of so many of these efforts, they either started at the MIT AI Lab or the Stanford AI Lab or the Knowledge Systems Lab, which is also at Stanford. The Knowledge Systems Lab, which used to be called the Heuristic Programming Project, was Feigenbaum’s lab, and during the ’80s and ’90s it often worked quite closely with John McCarthy’s group. Tom Gruber, who was the, sort of, chief designer and CTO of Siri, used to work at the Knowledge Systems Lab in the general area of knowledge representation and question answering and so on and so forth. Not just that. The other trace back to John McCarthy is, of course, Raj Reddy’s seminal thesis on Hearsay I. Raj Reddy was, I believe, John McCarthy’s first doctoral student, and Hearsay I was the first speech-understanding system, and then he went on to Carnegie Mellon to set up the computer science school there and so on and so forth. So many of these things, even Sebastian Thrun’s self-driving car, started off in some form at the Stanford AI Lab in the ’60s and ’70s.

Steven Cherry: That’s so interesting. Raj Reddy was also involved in autonomous-vehicle work at Carnegie Mellon.

Ramanathan Guha: Yes, yes.

Steven Cherry: Your own Ph.D. thesis concerns the use of contexts in artificial intelligence applications, and McCarthy was your advisor of course. What was it like to work with McCarthy on it?

Ramanathan Guha: Oh, it was absolutely fascinating. I had two advisors, John McCarthy and Ed Feigenbaum. Being a student of John McCarthy was something like being exposed to a supernova at close range. I mean, he really, really pushed the boundaries of thinking. People use the cliché “thinking outside the box.” He didn’t understand the word box. He would start at problems from such fundamental first principles, and he kept pushing on solving the hard problems, even if it meant an incomplete solution, as opposed to solving easier problems and giving more complete solutions. To him there was no problem that was outside scope. I still have this fond recollection: We were meeting once, and he used to love food, so he’d say, “Let’s have our weekly meetings over lunch,” and he would treat me, which was absolutely great as a grad student. And he said, “I’ve been thinking: What would it take to get all the people in the U.S. to visit a certain event? How will you house them?” And I’m like, “W-what are we talking about, John?” And he went on: “Okay, fine, we computed that we would need to build a cube which was about a mile high, a mile wide, and a mile thick,” and he went on very seriously designing this thing, and you realize that there was utterly, absolutely no problem that he considered outside his scope.

Steven Cherry: You know, I was looking at his Web page, which was sort of a blog that he started several years before there were blogs, and he was definitely an out-of-the-box thinker about pretty much everything. He at one point defended Larry Summers and James Watson for their politically incorrect views, the one on gender and the other on race. He was an aggressive atheist who put the existence of God on a par with that of werewolves. He worked at liberal universities his whole life but voted for George Bush in 2004. He was a defender of genetic engineering. Maybe you could just say a little bit more about John McCarthy, iconoclast.

Ramanathan Guha: Oh, I mean, with John McCarthy, you have to understand his upbringing. His parents were, I believe, members of the Communist Party, and he grew up like that, and then at some point in the mid-’60s or ’70s, after making a few trips to Russia, he decided he didn’t really like that, and so he got into sort of first-principles thinking. The other thing you have to understand about him is that he loved to argue. I mean, he loved to pick an argument. He often held extreme positions just so he could have an interesting discussion. So, that said, you could pick any one of his views, from voting for George Bush to his beliefs about God to his beliefs about whatever, and he would have, sort of, a well-justified position on it. And he was absolutely fond of argument: He would sometimes walk into the Stanford coffee shop, sit down with somebody, and start arguing with them. And they would have no idea who they were arguing with, but he would enjoy the argument.

Steven Cherry: Cisco, Sun Microsystems, and Google all came out of Stanford. Do you think that’s just a coincidence?

Ramanathan Guha: No. I mean, Google was of a much later generation, well after SAIL [the Stanford Artificial Intelligence Laboratory] had ended. Cisco, Sun, and to some extent SGI and a whole bunch of the PARC-related work all came out of the Stanford AI Lab, and that was because, even though John McCarthy ran what was called the Stanford AI Lab, a lot of the work that went on there was not necessarily really AI. It had the best computing resources on the West Coast, it had the smartest people, and he encouraged people to come, and Les Earnest and others created an environment where, so long as you were doing interesting and wonderful things, you could do them there. So Andy Bechtolsheim and others built the first Sun computer there. Sandy Lerner and others built the first routers over there. And so all these things were built there, and then people moved on and these companies got created and so on and so forth. It was probably just like the way Niels Bohr’s lab must have been in the early part of the last century, except instead of being about particle physics and quantum mechanics this was about computer science, and you had all these wonderful things come out of there.

Steven Cherry: Good. Well, thank you very much for your time, Guha.

Ramanathan Guha: You’re welcome.

Steven Cherry: We’ve been speaking with Ramanathan Guha of Google about the legacy of John McCarthy, inventor of the programming language Lisp, cofounder of the field of artificial intelligence, and one of the pioneers of the computer age, who died last Sunday. For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.

This interview was recorded 26 October 2011.
Audio engineer: Francesco Ferorelli

Follow us on Twitter @Spectrumpodcast

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum's audio programming is the audio version.
