Matthew Wilson’s curiosity about the nature of intelligence, both biological and synthetic, dates back to his days as an electrical engineering student, but when it came time to choose a research direction, he dedicated himself to studying how large groups of neurons in the hippocampus and other brain regions represent, process, and employ memory in behavior. His discoveries, including that animals replay memories of their daily activities when they sleep, that they rehash them during rest while awake, and that they sometimes even replay them backwards to retrace their steps, have proven of substantial interest to AI researchers.
Last year, for example, when Matt Botvinick, head of neuroscience at the AI company DeepMind, spoke at a conference at MIT, he specifically highlighted the Wilson lab’s work, including a particular Neuron paper on replay, as having a direct influence on how programmers have designed algorithms to learn from past performance. Much as it does in animals, replay helps algorithms iteratively identify and reinforce what went right and what went wrong, an approach at the heart of “deep reinforcement learning.”
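In code, the data structure behind experience replay is strikingly simple. The sketch below is a minimal, hypothetical illustration; the buffer capacity, batch size, and interface are assumptions for illustration, not DeepMind’s actual implementation:

```python
import random
from collections import deque

# A toy replay buffer: store experienced transitions, then revisit random
# samples of them during training, loosely analogous to hippocampal replay.
# Capacity and batch size are arbitrary illustrative choices.

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest memories evicted first

    def store(self, state, action, reward, next_state):
        """Record one experienced transition."""
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        """'Replay' a random batch of past transitions for learning."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```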
A couple of months after Botvinick made those remarks, DeepMind researchers co-authored a paper in Cell providing evidence that humans, too, use replay to refine and apply learning. The study cited not only four of Wilson’s papers but also three by Susumu Tonegawa, who has shown that rodents will also prepare for an upcoming experience by imagining what is to come, a phenomenon dubbed “preplay.”
Indeed, citations of Picower faculty papers by AI-related studies are not rare. According to data from the Web of Science furnished by the MIT Libraries, through February 2020 more than 1,680 papers tagged as relevant to AI research had cited more than 200 papers authored by current Picower Institute faculty members, particularly those whose studies involve systems-level neuroscience, such as Wilson, Tonegawa, Earl Miller, Emery N. Brown, Mriganka Sur, and Mark Bear. The numbers don’t reveal how much Picower faculty have influenced AI compared to other researchers, but they do demonstrate that their work has mattered to the field.
The tallies also show how Picower research has mattered to AI by highlighting clear themes. Among the 25 Picower papers most cited by AI-related papers, several concern Wilson’s studies of how the hippocampus and other regions encode motion through an environment, such as a maze, and replay those memories. Another cluster represents some of Miller’s efforts to understand how the prefrontal cortex governs cognitive functions like working memory and selective attention. A few, including the most cited of all, derive from Sur’s work to understand how the brain rewires itself based on experience to tune mental function to the demands of the world. And many highly cited papers demonstrate that Brown’s rigorous statistical methods for finding meaningful patterns in neural activity have been valuable for engineers seeking to represent such patterns in algorithmic code.
“A lot of the big ideas in AI were really derived from the biology, from neuroscience,” observes Wilson, whose lab continues to explore how the hippocampus and connected regions encode context, reward and action to produce goal-directed behavior. “Basic science can provide the kind of novel insight that can fuel the next wave of innovation.”
The ways in which Picower’s fundamental research has fed into AI may become of heightened interest as MIT launches and builds its new Schwarzman College of Computing, which has a strong AI focus. “Ensuring that the future of computing is shaped by insights from other disciplines” is an explicit part of its mission.
When Brown began his neuroscience research by collaborating with Wilson in the late 1990s, he did not anticipate his work’s relevance to AI. He was focused on developing mathematically principled statistical methods for more accurately decoding where rats were in a maze from their neural activity as they scampered through. In a highly cited Journal of Neuroscience paper with Wilson in 1998, for example, he showed that his methods reduced the error of estimating position from neural activity from 30 centimeters to 8.
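To give a flavor of what such decoding involves, here is a minimal sketch of position decoding from place-cell spike counts. It assumes Poisson firing and known tuning curves; Brown and Wilson’s 1998 method is a more sophisticated point-process filter, so treat this purely as an illustration of the idea:

```python
import numpy as np

def decode_position(spike_counts, tuning_curves, positions, dt=0.25):
    """Maximum-likelihood position estimate from one time bin of spikes.

    spike_counts  : (n_cells,) spikes observed from each place cell
    tuning_curves : (n_cells, n_positions) expected firing rate (Hz) of
                    each cell at each candidate position
    positions     : (n_positions,) candidate positions along the track
    dt            : width of the decoding time bin, in seconds
    """
    expected = tuning_curves * dt                       # expected spike counts
    # Poisson log-likelihood of the observed counts at each candidate position
    log_like = (spike_counts[:, None] * np.log(expected + 1e-12)
                - expected).sum(axis=0)
    return positions[np.argmax(log_like)]               # most probable position
```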
Brown’s many advances in decoding patterns of motion from neural signals caught the attention of the brain-computer interface field, in which engineers seek to create prosthetics by reading out the brain’s intention to move a missing or paralyzed arm and having a computer translate that intention into commands to move a cursor or a robotic arm. For this application of AI to truly help patients, the decoding must be as accurate and fast as possible, which is why the field has cited Brown’s work.
“We have to do algorithmic research to make sure it’s optimal,” Brown says.
Brown is not only a neuroscientist and statistician but also an anesthesiologist at Massachusetts General Hospital. In more recent work, his lab has developed statistical methods for accurately extracting meaningful patterns from EEG measurements of brain waves in patients under general anesthesia. The lab has shown, for instance, how the waves differ in older versus younger patients, and how they are affected by different anesthetic drugs and doses. He is implementing this knowledge in an AI-powered brain-computer interface of his own: one that will continuously monitor a patient’s EEG to help anesthesiologists control dosing, keeping patients properly anesthetized without giving them more drug than needed.
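The closed-loop concept can be sketched as a simple feedback controller. Everything below, including the EEG-derived index, the target value, the proportional-integral rule, and the gains, is an illustrative assumption, not the system under development in Brown’s lab:

```python
class ClosedLoopDoser:
    """Toy controller: nudge an anesthetic infusion rate to hold an
    EEG-derived index at a target level via a proportional-integral rule."""

    def __init__(self, target=0.9, kp=0.5, ki=0.05):
        self.target = target       # desired value of the EEG-derived index
        self.kp, self.ki = kp, ki  # controller gains (hypothetical values)
        self.integral = 0.0        # error accumulated over time

    def update(self, eeg_index, dt=1.0):
        """Return an infusion-rate adjustment from the latest EEG reading."""
        error = self.target - eeg_index
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral
```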
Neither Brown nor Wilson calls himself an AI researcher, of course, but both maintain a formal connection to the field via their affiliations with MIT’s Center for Brains, Minds and Machines, a National Science Foundation-funded entity that facilitates dialogue and collaboration among researchers who study different manifestations of intelligence, real and engineered.
Sur, too, has sometimes collaborated with AI-minded colleagues. Two decades ago he showed that the brain of a developing ferret was so adaptable that if the auditory cortex were cut off from its normal input, it would rewire to help process visual input instead. The findings so intrigued colleagues in robotics that he was invited to participate in a conference inspired by the vision that if robots were designed to build their intelligence the way the developing biological brain does, that is, to mimic the brain’s “plasticity,” its capacity to rewire neural connections based on experience, they might efficiently develop a flexible and general, rather than task-specific, intellect.
“The brain wires itself to process the world,” Sur said.
Out of that meeting, Sur co-authored a 2001 paper in Science describing this vision of “autonomous mental development,” which has been cited by about 200 AI-related papers, according to the Web of Science. Sur has continued to study developmental plasticity in the brain and the mechanisms of learning and action, including the role of non-neuronal cells such as astrocytes. That work led, this summer, to a new opportunity to interact with the AI field: with colleagues in MIT’s Computer Science and Artificial Intelligence Laboratory, he proposes to study a mechanism that might underlie the brain’s capacity to learn even when a good or bad outcome arises only after several steps have unfolded over a significant stretch of time. Their hypothesis is that the slow but sustained activity of astrocytes might integrate many inputs over time, helping circuits recognize these multi-step cause-and-effect relationships.
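One long-standing reinforcement-learning mechanism for this kind of multi-step credit assignment is the eligibility trace: a slowly decaying record of recent activity that lets a delayed reward be credited to the steps that preceded it. The sketch below shows that standard mechanism, TD(lambda) with accumulating traces; it illustrates the computational principle only, not the lab’s astrocyte hypothesis or any actual model from the proposal:

```python
import numpy as np

n_states = 10                        # e.g., positions along a track
alpha, gamma, lam = 0.1, 0.9, 0.8    # learning rate, discount, trace decay
values = np.zeros(n_states)          # estimated long-run value of each state
trace = np.zeros(n_states)           # slowly decaying record of visited states

def td_lambda_step(state, next_state, reward):
    """One TD(lambda) update: credit a delayed outcome to earlier states."""
    trace[:] *= gamma * lam          # every state's trace fades over time...
    trace[state] += 1.0              # ...but the just-visited state is refreshed
    td_error = reward + gamma * values[next_state] - values[state]
    values[:] += alpha * td_error * trace   # update in proportion to each trace
```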
Miller says that although his work addresses many questions of how intelligent behavior emerges in the brain from the coordination of different regions and networks, he hasn’t explicitly focused on connecting those findings with AI research. Nonetheless his work has been cited by AI researchers looking for inspiration in the natural operation of the brain. A 2001 review paper Miller co-authored on how the prefrontal cortex biases the function of other regions to ensure that activity is coordinated to achieve goals is the second most AI-cited Picower paper, according to the database, cited for instance by programmers considering algorithmic decision making or more basic issues of robotic control. Also among the top papers is one published in Science in 2007 that offered an explanation of volitional versus reflexive attention: volitional attention is synchronized with lower-frequency brain waves emanating from the prefrontal cortex, while reflexive attention, guided by the senses, depends on higher-frequency waves from sensory cortices. Miller continues to explore how other cognitive functions, such as working memory, are implemented by cortical networks.
And like many of his colleagues, Miller is no stranger to computation. He maintains many collaborations with computational neuroscientists whose software models of brain function help him analyze experimental data. The ability of computing to enhance neuroscience, not just of neuroscience to enhance computing, is also woven into the new College of Computing’s DNA.
By serendipity, the College’s new building, scheduled to open in 2023, will be built on Vassar Street right next to Building 46. That could lay the literal groundwork for new collaborations.
“My goal is to figure out the brain and their goal is to create a smart computer,” Miller said. “If they see their pathway to making a smart computer by going through the biology of the brain, then they couldn’t pick a better place to land their building next to. What we are doing is highly relevant if you are interested in how the brain produces intelligence.”
Editor’s note: We thank MIT Librarian Courtney Crummett for her help with the data. This story originally appeared in The Picower Institute's quarterly print newsletter. Subscriptions are free.