AI is getting better at reading minds

Think about the words swirling around in your head: that tasteless joke you wisely kept to yourself at dinner; your unspoken impression of your best friend’s new partner. Now imagine that someone could listen.

On Monday, scientists at the University of Texas at Austin took another step in that direction. In a study published in the journal Nature Neuroscience, researchers described an AI that could translate the private thoughts of human subjects by analyzing fMRI scans, which measure blood flow to different regions of the brain.

Already, researchers have developed language decoding methods to pick up the attempted speech of people who have lost the ability to speak, and to allow paralyzed people to write just by thinking about writing. But the new language decoder is one of the first not to rely on implants. In the study, it was able to translate a person’s imagined speech into actual speech, and when subjects were shown silent movies, it could generate relatively accurate descriptions of what was happening on screen.

“It’s not just a linguistic stimulus,” said Alexander Huth, a neuroscientist at the university who helped lead the research. “We’re talking about meaning, something about the idea of what’s going on. And the fact that this is possible is very exciting.”

The study centered on three participants, who came to Dr. Huth’s lab for 16 hours over several days to listen to “The Moth” and other narrative podcasts. As they listened, an fMRI scanner recorded blood oxygenation levels in parts of their brains. The researchers then used a large language model to match patterns of brain activity to the words and phrases the participants had heard.

Large language models like OpenAI’s GPT-4 and Google’s Bard are trained on large amounts of writing to predict the next word in a sentence or phrase. In the process, the models create maps showing how words relate to one another. A few years ago, Dr. Huth noticed that particular elements of these maps – the so-called contextual embeddings, which capture the semantic features, or meanings, of sentences – could be used to predict how the brain lights up in response to language.
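To make that encoding idea concrete, here is a minimal sketch of how contextual embeddings could be regressed against fMRI responses. It is an illustration under simplifying assumptions, not the authors’ pipeline: the phrases, the simulated voxel data, and the alignment between text and scan time points are all hypothetical, and a small open model (GPT-2) stands in for whichever language model the team actually used.

```python
# A rough illustration of an encoding model: contextual embeddings of heard language
# are used to predict voxel responses. All data below are toy stand-ins, not study data.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2Model.from_pretrained("gpt2")

def embed(text: str) -> np.ndarray:
    """Average GPT-2 hidden states over tokens: one semantic vector per passage."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**inputs).last_hidden_state      # shape: (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()     # shape: (768,)

# Toy stand-ins: phrases the participant "heard" and simulated voxel responses.
heard = ["I got up from the air mattress",
         "I leaned my face against the window",
         "I found only darkness outside"]
bold = np.random.randn(len(heard), 500)              # pretend scan: 3 time points x 500 voxels

# Ridge regression from embedding space to every voxel at once (one weight map per voxel).
encoder = Ridge(alpha=1.0).fit(np.stack([embed(t) for t in heard]), bold)

# The fitted encoder predicts how the brain should "light up" for new language.
predicted = encoder.predict(embed("I stared out the window").reshape(1, -1))
```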

In a fundamental sense, said Shinji Nishimoto, a neuroscientist at Osaka University who was not involved in the research, “brain activity is a kind of encrypted signal, and language models provide ways to decipher it.”

In their study, Dr. Huth and his colleagues effectively reversed the process, using another AI to translate the participants’ fMRI images into words and sentences. The researchers tested the decoder by having participants listen to new recordings and then seeing how well the translation matched the actual transcript.
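That reversal can be sketched as a search: propose candidate word sequences with a language model and keep whichever candidates make the encoder’s predicted brain response best match the scan that was actually recorded. The toy beam search below reuses the hypothetical `embed` and `encoder` from the sketch above, plus a made-up vocabulary; it illustrates only the direction of the inference, not the authors’ actual decoder.

```python
# Score a candidate transcript by how closely the encoder's predicted voxel pattern
# matches the observed pattern for one time point (higher is better).
def score(candidate: str, observed: np.ndarray) -> float:
    predicted = encoder.predict(embed(candidate).reshape(1, -1))[0]
    return -float(np.linalg.norm(predicted - observed))

def decode(observed: np.ndarray, vocabulary: list[str],
           length: int = 5, beam_width: int = 3) -> str:
    """Toy beam search over word sequences, guided only by the brain recording."""
    beams = [""]
    for _ in range(length):
        candidates = [f"{b} {w}".strip() for b in beams for w in vocabulary]
        candidates.sort(key=lambda c: score(c, observed), reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

# Made-up vocabulary; a real system would draw proposals from the language model itself.
guess = decode(bold[0], ["I", "walked", "to", "the", "window", "saw", "darkness"])
```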

Almost every word was out of place in the decoded script, but the meaning of the passage was consistently preserved. Essentially, the decoder was paraphrasing.

Original transcript: “I got up from the air mattress and leaned my face against the glass of the bedroom window, expecting to see eyes staring at me, but instead I found only darkness.”

Decoded from brain activity: “I just kept walking to the window and opening the glass. I stood on my tiptoes and looked outside, saw nothing and looked up again, I did not see anything.”
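One rough way to quantify the kind of paraphrasing seen in the pair above: the exact word overlap between the two passages is low, while their similarity in embedding space is comparatively high. The snippet below reuses the hypothetical `embed` helper from earlier; it is an illustrative check, not the evaluation metric the researchers reported.

```python
# Compare the original and decoded passages two ways: shared words vs. cosine similarity
# of their embeddings. Paraphrases score low on the first and high on the second.
reference = ("I got up from the air mattress and leaned my face against the glass of the "
             "bedroom window expecting to see eyes staring at me but instead I found only darkness")
decoded = ("I just kept walking to the window and opening the glass I stood on my tiptoes "
           "and looked outside saw nothing and looked up again I did not see anything")

ref_words, dec_words = set(reference.lower().split()), set(decoded.lower().split())
word_overlap = len(ref_words & dec_words) / len(ref_words)

a, b = embed(reference), embed(decoded)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"word overlap: {word_overlap:.2f}   embedding similarity: {cosine:.2f}")
```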

During the fMRI scan, participants were also asked to silently imagine telling a story; afterward, they repeated the story aloud for reference. Here too, the decoding model captured the gist of the unspoken version.

Participant version: “Look for a message from my wife saying that she has changed her mind and is coming back.”

Decoded version: “To see her for some reason, I thought she would come to me and tell me that she misses me.”

Finally, subjects watched a brief silent animated film, again while undergoing an fMRI scan. By analyzing their brain activity, the language model could decode a rough summary of what they were watching – perhaps their internal description of the scenes.

The result suggests that the AI decoder was not only capturing words, but also meaning. “Language perception is an external process, while imagination is an active internal process,” Dr. Nishimoto said. “And the authors showed that the brain uses common representations through these processes.”

Greta Tuckute, a neuroscientist at the Massachusetts Institute of Technology who was not involved in the research, said that was “the high-level question.”

“Can we decode the meaning of the brain?” she continued. “In a way, they show that, yes, we can.”

This method of decoding language has limitations, Dr. Huth and his colleagues noted. For one thing, fMRI scanners are bulky and expensive. Moreover, training the model is a long, tedious process, and to be effective it must be done separately for each individual. When researchers tried to use a decoder trained on one person to read another person’s brain activity, it failed, suggesting that every brain has unique ways of representing meaning.

Participants were also able to shield their internal monologues, throwing off the decoder by thinking about other things. The AI might be able to read our minds, but for now it will have to read them one at a time, and with our permission.
