Andrew Lampinen, DeepMind: On symbolic behavior, mental time travel, and insights from psychology

February 28, 2022

RSS · Spotify · Apple Podcasts · Pocket Casts

Andrew Lampinen (Google Scholar) is a Research Scientist at DeepMind. He previously completed his Ph.D. in cognitive psychology at Stanford. In this episode, we discuss generalization and transfer learning, how to think about language and symbols, what AI can learn from psychology (and vice versa), mental time travel, and the need for more human-like tasks.

Some highlights from our conversation

“I am very skeptical of the notion of a disentangled representation as such. In particular, I think that the notion of disentanglement has to be much more specific to the context and the goal than I usually see people thinking about it.”

“Under certain training conditions, ImageNet models are biased towards using texture to classify objects much more than humans are…that’s partly because of the training procedures: if you do a bunch of tiny crops of images in training, that biases models towards texture because it’s the only thing there…but it’s also because ImageNet is just set up with these very strong correlations in a way that maybe human experience isn’t—because we see a lot of cartoon lions as well as real lions, and we see things in more variable backgrounds, and so on. So I think if you want to generalize well on the ImageNet test set, you might actually not want to solve the problem in the same way as humans necessarily, because there is real signal in those textures that humans are ignoring, slightly to their detriment.”

“We basically tried to argue that AI should write more review articles and do more meta-analyses, and that psychology should basically just publish more incremental papers.”

“Language as a means of compression might play a role in [making us robust to changes in our representations]. It has some nice properties for memory; it’s a relatively small thing to remember a description of something, and it’s relatively resilient to noise in a way that a continuous representation maybe isn’t. So maybe both of those properties are nice if you want to find a way of representing something in a way that you’ll remember it a long time in the future when your representations have shifted slightly.”

“Ultimately I’d like to train an agent in an environment like a human kid has. And in particular, one of the things we talked about a bunch in the symbolic behavior paper was the value of the social interactions where you come to agree on meaning about things. Kids do this all the time, creating imaginary worlds with their friends or their family, and having conversations with their parents about what things mean and what things are. And so I’d really love to train agents in environments that are more socially interactive in that way.”



Thanks to Tessa Hall for editing the podcast.