Think of a story and – at least some of the time – it will appear.
Translating someone’s brain activity into written words may sound like a science fiction dream, but a new artificial intelligence (AI) model developed at the University of Texas at Austin has achieved just that. Using only noninvasive scanning methods, the model can be trained to decode complex, continuous language from someone’s thoughts over extended periods.
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” said study co-lead Alex Huth, an assistant professor of neuroscience and computer science, in a statement.
Other similar systems are in development elsewhere, but what sets this one apart is that participants don’t need to undergo surgery to have implants fitted, nor are they restricted to a list of words they can use.
Using technology like that seen in OpenAI’s ChatGPT and Google’s Bard chatbots, the model – called a semantic decoder – is trained on hours of data obtained from an individual as they listen to podcasts whilst having their brain scanned via functional magnetic resonance imaging (fMRI). Later, with the participant’s consent, they can have their thoughts decoded while listening to a new story or imagining telling a story, and the model will generate a stream of text.
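Broadly, the published method works as a guided search: a language model proposes candidate word sequences, while an encoding model fitted to each participant’s scans predicts the brain activity each candidate should evoke; the candidates whose predictions best match the observed fMRI data are kept. The Python sketch below is only a toy illustration of that idea, assuming a made-up vocabulary, random embeddings, and a random linear encoding model – none of it is the authors’ code or data, and both the “scanner” and the candidate words are simulated so the example runs on its own.

```python
import numpy as np

# Toy sketch of a "semantic decoder": score candidate word sequences by how
# well a per-person encoding model's predicted fMRI response matches the
# observed scan. All components below are invented stand-ins.

rng = np.random.default_rng(0)

VOCAB = ["the", "dog", "ran", "home", "she", "said", "stop", "now"]
EMBED_DIM = 16
N_VOXELS = 32

# Random word embeddings and a random linear "encoding model". In the real
# study this mapping is learned from many hours of scan data per person.
embeddings = {w: rng.normal(size=EMBED_DIM) for w in VOCAB}
encoding_weights = rng.normal(size=(EMBED_DIM, N_VOXELS))

def predict_response(words):
    """Predict a voxel activity pattern for a candidate word sequence.
    Averaging ignores word order, so the decoder recovers the gist rather
    than exact wording -- loosely echoing the real system's behavior."""
    features = np.mean([embeddings[w] for w in words], axis=0)
    return features @ encoding_weights

def score(candidate, observed):
    """Higher when the predicted response better matches the observed scan."""
    predicted = predict_response(candidate)
    return float(np.corrcoef(predicted, observed)[0, 1])

def decode(observed, length=5, beam_width=3):
    """Beam search: grow word sequences, keeping those whose predicted
    brain responses best match the observed fMRI data."""
    beams = [([w], score([w], observed)) for w in VOCAB]
    beams = sorted(beams, key=lambda b: b[1], reverse=True)[:beam_width]
    for _ in range(length - 1):
        candidates = []
        for words, _ in beams:
            for w in VOCAB:  # a real system asks a language model for these
                seq = words + [w]
                candidates.append((seq, score(seq, observed)))
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams[0][0]

# Simulate "observed" brain activity for a true stimulus, then decode it.
true_words = ["the", "dog", "ran", "home", "now"]
observed = predict_response(true_words) + rng.normal(scale=0.1, size=N_VOXELS)
print(" ".join(decode(observed)))
```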
The results looked something like this:
The stories the participants were listening to are shown on the left; on the right is what the model was able to decode from their brain activity. Image credit: University of Texas at Austin
The decoder can’t reproduce a person’s thoughts word for word, but it can often capture the gist of what they’re thinking. After extensive training, it produces text that closely, and occasionally exactly, matches the person’s thoughts about half of the time.
The study wasn’t just limited to hearing or thinking about stories. This video shows what the model was able to decode from someone’s brain activity while they were watching a movie clip with the sound turned off:
It may not be perfect, but the fact that the whole process is noninvasive is a big plus. In the future, it’s hoped that further development of technology like this could help patients who are no longer able to physically communicate via speech, such as some stroke survivors.
But if looking at this kind of tech gives you an uneasy feeling, you’re not alone. For many people, a device that can read your thoughts is more the stuff of dystopian nightmares than sci-fi fantasy.
Addressing these inevitable fears, study co-lead and doctoral student Jerry Tang said, “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them.”
For starters, there’s the practical consideration that this system has to be trained for hours before it can begin to work. “A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” Huth explained.
Beyond that, there’s a failsafe: even someone who had participated in training the model could prevent it from decoding their inner speech by thinking of something unrelated, such as animals.
Still, as the researchers continue to develop this technology, privacy and safety are at the forefront. “I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” said Tang. “Regulating what these devices can be used for is also very important.”
The study is published in Nature Neuroscience.