Stanford News - January 25, 2021 - by Taylor Kubota
In the course of deciding whether to keep reading this article, you may change your mind several times. While your final choice will be obvious to an observer – you’ll continue to scroll and read, or you’ll click on another article – any internal deliberations you had along the way will most likely be inscrutable to anyone but you. That clandestine hesitation is the focus of research, published Jan. 20 in Nature, by Stanford University researchers who study how cognitive deliberations are reflected in neural activity.
These scientists and engineers developed a system that read and decoded the activity of monkeys’ brain cells while the animals were asked to identify whether an animation of moving dots was shifting slightly left or right. The system successfully revealed the monkeys’ ongoing decision-making process in real time, complete with the ebb and flow of indecision along the way.
“I was just looking at the decoded activity trace on the screen, not knowing which way the dots were moving or what the monkey was doing, and I could tell Sania [Fong], the lab manager, ‘He’s going to choose right,’ seconds before the monkey initiated the movement to report that same choice,” recalled Diogo Peixoto, a former postdoctoral scholar in neurobiology and co-lead author of the paper. “I would get it right 80 to 90 percent of the time, and that really cemented that this was working.”
In subsequent experiments, the researchers were even able to influence the monkeys’ final decisions through subliminal manipulations of the dot motion.
“Fundamentally, much of our cognition is due to ongoing neural activity that is not reflected overtly in behavior, so what’s exciting about this research is that we’ve shown that we can now identify and interpret some of these covert, internal neural states,” said study senior author William Newsome, the Harman Family Provostial Professor in the Department of Neurobiology at Stanford University School of Medicine.
“We’re opening up a window onto a world of cognition that has been opaque to science until now,” added Newsome, who is also the Vincent V.C. Woo Director of the Wu Tsai Neurosciences Institute.
One decision at a time
Neuroscience studies of decision making have generally involved estimating the average activity of populations of brain cells across hundreds of trials. But this process overlooks the intricacies of a single decision and the fact that every instance of decision making is slightly different: The myriad factors influencing whether you choose to read this article today will differ from those that would affect you if you were to make the same decision tomorrow.
“Cognition is really complex and, when you average across a bunch of trials, you miss important details about how we come to our perceptions and how we make our choices,” said Jessica Verhein, an MD/PhD student in neuroscience and co-lead author of the paper.
For these experiments, the monkeys were outfitted with a neural implant about the size of a pinky fingernail that reported the activity of 100 to 200 individual neurons every 10 milliseconds as they were shown digital dots parading on a screen. The researchers placed this implant in the dorsal premotor cortex and the primary motor cortex because, in previous research, they found that neural signals from these brain areas convey the animals’ decisions and their confidence in those decisions.
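To make the scale of that data stream concrete, here is a minimal sketch assuming a hypothetical 192-channel recording binned at 10 milliseconds over a two-second trial; the array shapes and the Poisson stand-in for real spiking are illustrative assumptions, not the study's actual recording format:

```python
import numpy as np

# Illustrative sketch only: a hypothetical 192-neuron recording
# (within the reported 100-200 range), binned at 10 ms over a 2-second trial.
N_NEURONS = 192
BIN_MS = 10
TRIAL_MS = 2000
n_bins = TRIAL_MS // BIN_MS  # 200 bins

rng = np.random.default_rng(0)
# Poisson-distributed counts stand in for real spiking activity.
spike_counts = rng.poisson(lam=0.8, size=(n_bins, N_NEURONS))

print(spike_counts.shape)  # (200, 192): one row of per-neuron counts per 10 ms bin
```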
Each video of moving dots was unique and lasted less than two seconds, and the monkeys reported their decisions about whether the dots were moving right or left only when prompted – a correct answer given at the correct time earned a juice reward. The monkeys signaled their choice clearly by pressing a right or left button on the display.
Inside the monkeys’ brains, however, the decision process was less obvious. Neurons communicate through rapid bursts of noisy electrical signals, which occur alongside a flurry of other activity in the brain. But Peixoto was able to predict the monkeys’ choices easily, in part because the activity measurements he saw were first fed through a signal processing and decoding pipeline based on years of work by the lab of Krishna Shenoy, the Hong Seh and Vivian W. M. Lim Professor in the School of Engineering and a professor, by courtesy, of neurobiology and of bioengineering, and a Howard Hughes Medical Institute Investigator.
Shenoy’s team had been using their real-time neural decoding technique for other purposes. “We are always trying to help people with paralysis by reading out their intentions. For example, they can think about how they want to move their arms and then that intention is run through the decoder to move a computer cursor on the screen to type out messages,” said Shenoy, who is co-author of the paper. “So, we’re constantly measuring neural activity, decoding it millisecond by millisecond, and then rapidly acting on this information accordingly.”
In this particular study, instead of predicting an immediate arm movement, the researchers wanted to predict an upcoming choice that the monkey would only later report with an arm movement – which required a new algorithm. Inspired by the work of Roozbeh Kiani, a former postdoctoral scholar in the Newsome lab, Peixoto and colleagues perfected an algorithm that takes in the noisy signals from groups of neurons in the dorsal premotor cortex and the primary motor cortex and reinterprets them as a “decision variable.” This variable describes the activity happening in the brain preceding a decision to move.
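The study's actual decoder is more elaborate, but a minimal sketch of the underlying idea, projecting each 10-millisecond snapshot of population activity onto a set of readout weights and accumulating the result into a signed decision variable, might look like the following; the weights, the stand-in data, and the simple cumulative sum are assumptions for illustration only:

```python
import numpy as np

def decision_variable(spike_counts, weights):
    """Toy decision variable: a running, weighted sum of population activity.

    spike_counts: (n_bins, n_neurons) array of binned spike counts.
    weights:      (n_neurons,) readout weights; a real decoder would fit
                  these to training trials rather than draw them at random.
    Returns a (n_bins,) trace whose sign indicates the current leaning of
    the decision in this sketch (positive ~ right, negative ~ left).
    """
    per_bin_evidence = spike_counts @ weights   # one number per 10 ms bin
    return np.cumsum(per_bin_evidence)          # accumulate over the trial

# Hypothetical usage with random stand-in data:
rng = np.random.default_rng(1)
counts = rng.poisson(1.0, size=(200, 192)).astype(float)
weights = rng.normal(0.0, 0.05, size=192)
dv = decision_variable(counts, weights)
predicted_choice = "right" if dv[-1] > 0 else "left"
```

In a real decoder the readout weights would be learned from training trials so that the trace actually tracks the animal's evolving choice; here they are random placeholders.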
“With this algorithm, we can decode the ultimate decision of the monkey way before he moves his finger, let alone his arm,” said Peixoto.
Three experiments
The researchers speculated that more positive values of the decision variable indicated increased confidence by the monkey that the dots were moving right, whereas more negative values indicated confidence that the dots were shifting left. To test this hypothesis, they conducted two experiments: one where they would halt the test as soon as the decision variable hit a certain threshold and another where they stopped it when the variable seemed to indicate a sharp reversal of the monkey’s decision.
During the first experiment, the researchers stopped the tests at five randomly chosen levels and, at the highest positive or negative decision variable levels, the variable predicted the monkey’s final decision with about 98 percent accuracy. Predictions in the second experiment, in which the monkey had likely undergone a change of mind, were almost as accurate.
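A hedged sketch of the stopping rule in the first experiment, watching the decision variable bin by bin and freezing the trial once its magnitude crosses a preset level, could look like this; the threshold value and the input trace are placeholders rather than values from the study:

```python
def stop_at_threshold(dv_trace, threshold):
    """Return (bin index, predicted choice) at the first crossing of the
    threshold magnitude, or (None, prediction from the final value) if the
    trace never crosses. Purely illustrative; not the study's procedure."""
    for t, value in enumerate(dv_trace):
        if abs(value) >= threshold:
            return t, ("right" if value > 0 else "left")
    return None, ("right" if dv_trace[-1] > 0 else "left")

# Hypothetical usage with the toy trace from the earlier sketch:
# stop_bin, prediction = stop_at_threshold(dv, threshold=5.0)
```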
In advance of the third experiment, the researchers checked how many dots they could add during the test before the monkey became distracted by the change in the stimulus. Then, in the experiment, the researchers added dots below the noticeable threshold to see if it would sway the monkey’s decision subliminally. And, even though the new dots were very subtle, they did sometimes bias the monkey’s choices toward whatever direction they were moving. The influence of the new dots was stronger if they were added early in the trial and at any point where the monkey’s decision variable was low – which indicates a weak level of certainty.
“This last experiment, led by Jessie [Verhein], really allowed us to rule out some of the common models of decision making,” said Newsome. According to one such model, people and animals make decisions based on the cumulative sum of evidence during a trial. But if this were true, then the bias the researchers introduced with the new dots should have had the same effect no matter when it was introduced. Instead, the results seemed to support an alternative model, which states that if a subject has enough confidence in a decision building in their mind, or has spent too long deliberating, they are less inclined to consider new evidence.
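To make the contrast between the two models concrete, consider this toy comparison (the parameters and the simple confidence bound are assumptions, not the authors' models): a pure accumulator sums every sample of evidence, so a brief pulse of bias shifts the outcome by the same amount whenever it arrives, whereas a bounded accumulator stops integrating once the running total reaches a confidence bound, so a late pulse often arrives after the decision is effectively locked in.

```python
import numpy as np

def accumulate(evidence, bound=None):
    """Sum evidence over time; if `bound` is set, stop integrating once the
    running total's magnitude reaches it (a toy confidence criterion)."""
    total = 0.0
    for sample in evidence:
        total += sample
        if bound is not None and abs(total) >= bound:
            break
    return total

rng = np.random.default_rng(2)
base = rng.normal(0.05, 1.0, size=200)   # noisy evidence with a slight rightward drift

early = base.copy()
late = base.copy()
early[10:20] += 0.5    # small bias pulse early in the trial (total extra evidence of about +5)
late[150:160] += 0.5   # the same pulse, delivered late

# Without a bound, the pulse shifts the final total by the same amount
# no matter when it arrives:
print(accumulate(early) - accumulate(base))   # ~ +5.0
print(accumulate(late) - accumulate(base))    # ~ +5.0

# With a bound, integration often stops before a late pulse arrives,
# so the late pulse's influence shrinks or vanishes:
print(accumulate(early, bound=8.0) - accumulate(base, bound=8.0))
print(accumulate(late, bound=8.0) - accumulate(base, bound=8.0))
```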
New questions, new opportunities
Already, Shenoy’s lab is repeating these experiments with human participants with neural dysfunctions who use these same neural implants. Due to differences between human and nonhuman primate brains, the results could be surprising.
Potential applications of this system beyond the study of decision making include investigations of visual attention, working memory or emotion. The researchers believe that their key technological advance – monitoring and interpreting covert cognitive states through real-time neural recordings – should prove valuable for cognitive neuroscience in general, and they are excited to see how other researchers build on their work.
“The hope is that this research captures some undergraduate’s or new graduate student’s interest and they get involved in these questions and carry the ball forward for the next 40 years,” said Shenoy.
Stanford co-authors include former postdoctoral scholars Roozbeh Kiani (now at New York University), Jonathan C. Kao (now at the University of California, Los Angeles) and Chand Chandrasekaran (now at Boston University); Paul Nuyujukian, assistant professor of bioengineering and of neurosurgery; previous lab manager Sania Fong and researcher Julian Brown (now at UCSF); and Stephen I. Ryu, adjunct professor of electrical engineering (also head of neurosurgery at the Palo Alto Medical Foundation). Newsome, Nuyujukian and Shenoy are also members of Stanford Bio-X and the Wu Tsai Neurosciences Institute.
This research was funded by the Champalimaud Foundation, Portugal; Howard Hughes Medical Institute; National Institutes of Health via the Stanford Medical Scientist Training Program; Simons Foundation Collaboration on the Global Brain; Pew Scholarship in Biomedical Sciences; National Institutes of Health (including a Director’s Pioneer Award); McKnight Scholars Award; National Science Foundation; National Institute on Deafness and Other Communication Disorders; National Institute of Neurological Disorders and Stroke; Defense Advanced Research Projects Agency – Biological Technologies Office (NeuroFAST Award); and Office of Naval Research.