Human Memory, Computer Memory, and Memento
The Soar cognitive architecture doesn’t look like a brain, but it functions like one
Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
Alan Turing dreamt of a computer program that could imitate a person, but we’re nowhere near that yet.
While the goal of computer science is a truly general-purpose problem solver—an artificial intelligence as open-ended and flexible as the human brain itself—the reality of computers today is a collection of specific problem solvers that get better and better within limited domains.
Deep Blue can beat the world champion at chess, but you or I could beat it in checkers. Google does a brilliant job at searching the Web for information, but it can’t answer trivia questions like Watson, the program that beat the world “Jeopardy!” champion but which can’t play checkers either.
My guest today has devoted much of his professional life toward the creation of a general problem solver. John Laird is the John L. Tishman Professor of Engineering at the University of Michigan. He’s the leading developer of Soar, S-o-a-r, which stands for state, operator, and result, and he’s the author of a new book, The Soar Cognitive Architecture, published this month by MIT Press.
John, welcome to the podcast.
John Laird: Thanks, Steven.
Steven Cherry: You wrote that one of the things that makes Soar more general—and I’m going to quote here—“traditionally the locus of decision making is the selection of the next rule to fire. In Soar, all matching rules fire in parallel, and the locus of decision making is selecting the next operator.” What does that mean, and is that more brainlike?
John Laird: Well, let’s get to the brain part later, and let’s start off with what I meant by that. I was contrasting traditional rule-based systems. In those systems, there are lots of rules, and the way the system does reasoning is it looks through the rules, finds the rule that matches the current situation in the best way, selects it, and then does the actions associated with that rule. Instead, what we’re trying to do in Soar is combine lots of rules at the same time, so when it’s in a given situation, many rules will match, and instead of picking one, it will fire all of them. And instead of those rules doing actions, say, in the world, what they’re doing in the first phase is proposing separate actions. Then there’ll be other rules that come along, look at what has been proposed, evaluate it, and say, “Well, in this situation this operator is better than another one,” or that the expected value of this operator is very high. And then there’s a decision procedure that looks at the information retrieved from those rules and selects an operator. So that gives us the chance to bring in knowledge to make the decision about what to do next. In traditional rule-based systems, there’s no chance to bring in additional knowledge to decide what to do next; it’s really just matching on the conditions. What we wanted to do was make it more flexible, so that you could have knowledge that would impact the selection of the next thing to do. Now, I think that is more humanlike, in that humans are able to look at a lot of different aspects of a situation before they decide what their next action will be. It’s not just a reflex, which is what you end up with [with] rule-based systems. Rule-based systems end up being very reactive; they don’t allow the system these multiple sources of knowledge.
So, one of [the] things we’ve done recently is adding more memories to Soar, so that it can not just look at rules to determine what to do next but can also access these other memories, which provide additional information about what to do next.
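The propose, evaluate, and decide phases Laird describes can be caricatured in a few lines. This is an illustrative toy, not Soar's actual implementation or API; the rule representation, the numeric preferences, and the decision procedure are all simplified assumptions.

```python
# Toy sketch of Soar's decision cycle: all matching rules fire in parallel
# to propose operators, evaluation rules add preferences, and a fixed
# decision procedure picks the next operator. Simplified for illustration.

def decide(state, propose_rules, evaluate_rules):
    # Phase 1: every matching proposal rule fires; nothing acts on the
    # world yet, the rules only suggest candidate operators.
    candidates = []
    for rule in propose_rules:
        candidates.extend(rule(state))
    # Phase 2: evaluation rules attach numeric preferences to candidates.
    prefs = {op: 0.0 for op in candidates}
    for rule in evaluate_rules:
        for op, value in rule(state, candidates):
            if op in prefs:
                prefs[op] += value
    # Phase 3: the decision procedure selects the best-supported operator.
    return max(candidates, key=lambda op: prefs[op])

# Toy agent: propose "drink" only when thirsty, always propose "rest".
propose = [lambda s: ["drink"] if s["thirsty"] else [],
           lambda s: ["rest"]]
evaluate = [lambda s, cands: [("drink", 2.0)] if "drink" in cands else [],
            lambda s, cands: [("rest", 0.5)]]

print(decide({"thirsty": True}, propose, evaluate))   # -> drink
print(decide({"thirsty": False}, propose, evaluate))  # -> rest
```

The point of the separation is that evaluation rules can bring in knowledge that proposal rules never see, which is exactly the flexibility Laird contrasts with picking a single rule to fire.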
Steven Cherry: Maybe an example would help here. You describe in the book something you call “Well World.” This is a hypothetical environment in which there are two water wells and a shelter and a thirsty computer.
John Laird: Yes, and we just use that for some experiments on how the system could learn when it should try looking back into its prior memories to help it make a decision in the current situation. In Well World, the system would be confused if it just looked at the current situation and didn’t consider where it had been in the past. We designed Well World so that whenever the system had to make a decision, it needed to ask, “Well, what happened in the past? And how should that influence my current decision?” And that made it possible for it to learn through experience when it should ask about what it’s seen in the past and when it should just make a decision based on what it sees right now.
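As a rough illustration of the episodic-memory idea behind Well World, here is a minimal sketch, assuming a simple chronological store of (situation, action) episodes retrieved by partial-match cue. The class, the method names, and the well scenario details are invented for this sketch and bear no relation to Soar's actual episodic memory module.

```python
# Minimal episodic-memory sketch: store a chronological history of episodes
# and retrieve the most recent one matching a cue. Invented for illustration;
# Soar's real episodic memory is far more elaborate.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # chronological (situation, action) pairs

    def store(self, situation, action):
        self.episodes.append((situation, action))

    def retrieve(self, cue):
        # Search newest-first for an episode whose situation matches the cue.
        for situation, action in reversed(self.episodes):
            if all(situation.get(k) == v for k, v in cue.items()):
                return situation, action
        return None

mem = EpisodicMemory()
mem.store({"at": "well-1", "water": "present"}, "drink")
mem.store({"at": "well-1", "water": "empty"}, "move-on")

# The current view alone is ambiguous, so the agent cues memory:
# "what happened the last time I was at well-1?"
print(mem.retrieve({"at": "well-1"}))
# -> ({'at': 'well-1', 'water': 'empty'}, 'move-on')
```

The learning problem Laird describes sits one level above this: deciding, from experience, when issuing such a retrieval is worth it and when the current percept alone suffices.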
Steven Cherry: Your book talks about Frogger, and I have to say that caught my eye. This is the classic video game in which a player has to maneuver his frog across a busy highway and avoid all the cars rushing past it, and some listeners will have seen the classic Seinfeld television episode in which George has to run a real-life gauntlet of crossing the street; ironically, he’s pushing an old arcade machine of Frogger. You call Frogger a, quote, “very difficult problem,” and that’s even after you narrow it down to just crossing the road once. A 6-year-old can win at Frogger. Why is it so difficult for a computer?
John Laird: Well, I don’t know about a 6-year-old winning at Frogger the first couple times. I think one of the things we’re trying to do here is start with a system not knowing very much about Frogger at all, and then through trial and error, by playing the game, trying out and finding out what works and doesn’t work, learning to play it better. And one of the things, the components of Soar, that we are illustrating in that example is what we call “mental imagery.” Most computer systems or AI systems do not have the ability to create internal images of the situations that they’ve been in in the past and use those for reasoning. And what our Frogger agent does is it imagines, “Well, if I move in this direction, will I hit one of the logs, or will I get eaten by a fish?” or whatever, and so it does that imagination, it takes that imaginative step and uses that to evaluate the situation as to whether that is going to be useful or not. And then only through experience, by either succeeding or failing, does it end up learning how to play Frogger. And you have to realize that this is a system that hasn’t played lots of video games before like a 6-year-old has; this is a system that’s learning sort of its first video game, and that makes it very challenging for it.
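The "mental imagery" step Laird describes, imagining a move and checking its consequences before committing, can be caricatured like this. The grid, the obstacle motion model, and the preference for moving up are all assumptions made for the sketch, not the actual Frogger agent.

```python
# Caricature of mental-imagery lookahead: imagine each candidate move,
# advance the simulated obstacles one tick, and pick a safe move that
# makes the most forward progress.

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def imagine_step(frog, move, obstacles):
    """Predict the frog's position after `move`, with obstacles advanced one tick."""
    dx, dy = MOVES[move]
    new_frog = (frog[0] + dx, frog[1] + dy)
    next_obstacles = {(x + vx, y) for (x, y, vx) in obstacles}
    return new_frog, new_frog in next_obstacles  # (position, collision?)

def choose_move(frog, obstacles):
    # Imagine every move, keep the safe ones, prefer the highest row.
    safe = []
    for move in ("up", "left", "right", "down"):
        pos, hit = imagine_step(frog, move, obstacles)
        if not hit:
            safe.append((move, pos))
    return max(safe, key=lambda mp: mp[1][1])[0] if safe else "down"

cars = {(2, 3, 1)}                  # car at (2, 3) moving right
print(choose_move((2, 2), cars))    # car will be at (3, 3), so "up" is safe
```

The agent Laird describes does not get this simulation for free; which imagined outcomes count as good or bad is itself learned from succeeding and failing at the game.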
Steven Cherry: You mentioned memory before, and I guess how software handles memory is one of the really important things about this Soar architecture. You write that your work was in part inspired by the movie Memento. The movie came out in the year 2000; it’s about a guy who has no short-term memory at all. He forgets things minutes after they happen, so he writes himself notes, and for really important things, he tattoos himself so he can’t possibly lose the note. And I should say that the drama comes from the fact that he’s hunting for somebody he thinks killed his wife, and he has to try and keep track of all these clues. How did Memento inspire your system at all?
John Laird: Well, so—a little correction: what he has is no ability to consolidate short-term memory into long-term memory. So he can have a memory about the current situation, look away, and remember it for a little bit, but he doesn’t have those sort of medium-term memories. And if we look at the AI systems I’ve been developing and everybody else has been developing, they don’t have that ability to just automatically save [the] history of what experiences they’ve had over time. What the movie did for me was show: here’s a human who doesn’t have this ability, an ability none of our AI systems have either, and he’s a cognitive cripple. He has to do all these things with his body, or with notes, in order to try to survive in the world. How can I expect to create an AI system that has the capabilities of humans when I’m missing this key component of human-level cognition, which is episodic memory? So that—among other things—was one of the inspirations for adding it to Soar. And I had a similar inspiration for adding what I talked about as “mental imagery”: when I solve problems, I’m often creating images of the problem in my mind—at least that’s what it appears to me when I think about it—whereas the original version of Soar could only have very abstract symbolic descriptions of situations, sort of describing them in language. That gets you so far, but there are a lot of problems you can’t solve unless you can also do that kind of imagery. So that’s been a real inspiration for our work: to look at where people have certain deficits that really hurt their abilities in life, and to ask, “Well, do our AI systems have those same deficits?” Yes, it seems they do. So we should be asking how to add those capabilities to our AI systems.
Steven Cherry: The imagery thing reminds me of the movie Inception, and the two movies have the same director, Christopher Nolan.
John Laird: My favorite director, by the way.
Steven Cherry: [laughs] No surprise. John, you say the “Jeopardy!”-playing computer Watson, which did so extraordinarily well on the show, even Watson isn’t a general-purpose problem solver in the way that Soar is, and yet the researchers there at IBM who worked on Watson, they next set their sights on medical diagnostics, and Watson was pretty quickly modified to attack that new problem area, and I gather they’re having quite a bit of success. Soar can’t win at “Jeopardy!” and it can’t do medical diagnosis. Maybe some specificity is a good thing.
John Laird: Oh, I think it’s very important. I think having a system that can use lots and lots of what we call “domain knowledge” is going to be critical for the success of these systems. There’s another research project that’s been going on as long as Soar, called Cyc, where the goal was to encode lots of very specific—well, a combination of general and specific knowledge. And I think both Cyc and Watson are examples [of] sort of the other side of what you need in order to get intelligence. So I would in no way say that that isn’t critical; it’s just that, tactically or strategically, what we wanted to do in our research was to go after the more general aspects of intelligence first. Then, I think, we have to incorporate the same kind of task-specific or domain-specific knowledge that they’re including in Watson at some point. So maybe someday we will want to have a lot of what’s in Watson in our systems as well. The other side of it is that we are just a research project going along at a university, whereas Watson really had a huge team that went and mined a lot of that knowledge, and I don’t have the capability of doing that right now.
Steven Cherry: Yeah, and that’s true of some other domain-specific areas of great progress, such as smart wheelchairs and self-driving cars and language translation. Do you see any particular areas where Soar would be particularly useful, or do you think some of these other systems would just do better if they built themselves on a backbone of Soar?
John Laird: Well, I think what we’re going to see, and what I’d like to see, is hybrids, where there might be a component of these systems in which Soar provides more general problem solving, similar to what happens when a person gets into a new problem. But there’s also going to be access to these very smart knowledge bases, so that the system has been preloaded with that knowledge; it has that expertise and can use it, so you sort of get the best of both worlds. And down the road, there’s a subfield of robotics called cognitive robotics, where you want the robot to know something about the person it’s interacting with and to be able to learn through interaction with a human. I think that’s going to be an area where we can have some sort of impact. One of the projects we’re working on is to teach robots new tasks and new language through interaction with a human. So that’s an area where we’ll see a lot of growth in the future.
Steven Cherry: It strikes me that the self-driving car could really benefit from learning from its memories in the way that you describe, but also from imagery.
John Laird: Well, I think they already have some of those components. If you look at the internal guts of those programs, they are building representations that are similar to, or possibly even more sophisticated than, the imagery systems we have. But they’re really focused on that driving task, and so there are other components of them that are not as general as what we’re trying to do right now.
Steven Cherry: Very good. Well, John, it’s going to be an amazing world for our grandchildren to live in, and we have researchers like yourself to thank for it, so thanks for it, and thanks for joining us today.
John Laird: Well, thank you for calling me up, and I enjoyed this very much.
Steven Cherry: We’ve been speaking with computer scientist John Laird about artificial intelligence software that tackles problems in a very human way. His new book, The Soar Cognitive Architecture, is being published this month by MIT Press. For IEEE Spectrum’s “Techwise Conversations,” I’m Steven Cherry.
NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.