Single brain implant restores bilingual communication to paralyzed man

    Tracking syllables of words lets English and Spanish training assist each other.

    If things ultimately work out as hoped, brain implants will restore communication for those who have become paralyzed due to injury or disease. But we're a long way from that future, and the implants are currently limited to testing in clinical trials.

     

    One of those clinical trials, based at the University of California, San Francisco, has now inadvertently revealed something about how the brain handles language, because one of the patients enrolled in the trial was bilingual, using English and Spanish. By tracking activity in the area of the brain where the intention to speak gets translated into control over the vocal tract, researchers found that both languages produce consistent signals in this area, so training the system to pick up English phrases would help improve its recognition of Spanish.

    Making some noise

    Understanding bilingualism is obviously useful for understanding how the brain handles language in general. The new paper describing the work also points out that supporting multiple languages should be part of the goal when restoring communication to people. Bilingual people will often change languages based on the social situation, or sometimes do so within a sentence in order to express themselves more clearly, and they often describe their bilingual abilities as a key component of their personalities.

     

    So, if we really want to restore communication to people, giving them access to all the languages they speak should be a part of it.

     

    The new work is designed to make that more likely. Part of a clinical trial called BRAVO (brain-computer interface for restoration of arm and voice), it involved placing relatively simple implants (128 electrodes) into the motor region of the brain—the part that translates intentions to perform actions into the signals needed to trigger muscles to execute them.

     

    In terms of speech, this means the neurons that convert the desire to say a word into the muscle activity needed to control the mouth and tongue, expel sufficient breath, and tension the vocal cords. This is downstream of the portion of the brain where word choices are made, where English and Spanish presumably differ (and which is in turn downstream of where meaning is sorted out, where the two languages might overlap).

     

    Hypothetically, if portions of words in the two languages sound sufficiently similar, the muscle control needed to produce them would also be similar. So, by tracking neural activity in this region, the system should be able to handle both languages and even detect overlaps between them.

    Leveraging language

    The process of detecting these signals is rather complex, given that neural activity looks like a noisy series of "spikes," or bursts of voltage changes. Translating those to specific meanings is generally handled by AI systems that are trained to associate certain patterns of activity with specific information (whether that information is "I want to say cat" or "I've seen a cat").

     

    So, a lot of the work here involved training the software portion of the system to recognize when the participant with the implant wanted to say specific words. This involved him imagining speaking a word while the software was told which word he intended to say. Using this approach, the researchers trained the system to recognize 50 English words, 50 Spanish words, and a few that were identical in both languages. Because of things like verb tenses, this worked out to 178 distinct words.
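    To get a rough sense of what that training step involves, here is a minimal Python sketch of the decoding idea: a classifier that maps a window of neural activity recorded during an attempted word to a label from the vocabulary. The simulated data, array shapes, and choice of classifier are illustrative assumptions, not the trial's actual decoding pipeline.

```python
# Minimal sketch of the decoding idea: map a window of neural activity
# features, recorded while the participant attempts a word, to one of the
# words in the vocabulary. The data is simulated, and the shapes and
# classifier are assumptions made for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials = 2000                      # hypothetical number of attempted-word trials
n_electrodes, n_timebins = 128, 20   # 128 electrodes; time bins per attempt (assumed)
vocab_size = 178                     # distinct English + Spanish words

# Each trial: activity per electrode per time bin, flattened into one
# feature vector; the label is the word the participant attempted to say.
X = rng.normal(size=(n_trials, n_electrodes * n_timebins))
y = rng.integers(0, vocab_size, size=n_trials)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=500)
clf.fit(X_train, y_train)

# Per-word probabilities for a new attempt; these guesses can then be
# handed to a language model to assemble the most plausible sentence.
word_probs = clf.predict_proba(X_test[:1])
print(word_probs.shape)  # (1, number of word classes seen in training)
```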

     

    On its own, this wasn't especially effective, with word error rates in the area of 70 percent. But we already have neural networks trained to arrange words into normal sentences, so the researchers piped this output into GPT-2, which identified and ranked likely sentences. That got the per-word error rate down to below 15 percent.
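    The re-ranking step can be sketched with the publicly available GPT-2: score each candidate sentence assembled from the decoder's word guesses by its likelihood under the language model and keep the best one. The candidate sentences below are made up, and this is only an illustration of the general idea rather than the trial's exact setup.

```python
# Sketch of language-model re-ranking: score candidate sentences built from
# the decoder's per-word guesses and keep the most likely one. Uses the
# public GPT-2 from Hugging Face; the trial's configuration may differ.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Average per-token log-probability of a sentence under GPT-2."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Supplying labels makes the model return the mean cross-entropy,
        # i.e. the negative average log-probability per token.
        loss = model(ids, labels=ids).loss
    return -loss.item()

# Hypothetical candidates assembled from the decoder's top word guesses.
candidates = [
    "I am very good",
    "I am very hood",
    "eye am very good",
]
best = max(candidates, key=sentence_log_prob)
print(best)
```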

     

    Once the system was trained, the participant could use it to participate in conversations and switch languages by choice. The system worked for at least 40 days before the software needed to be recalibrated through additional training, which is better than the reported performance of many other systems.

    Word production: They’re all the same

    What did this tell us about bilingualism? For one, it supports the hypothesis that this area of the brain is specialized in making sounds and doesn't care about the language they're needed for. While the system would perform better if you told it what language was being spoken first, it did far better than random guessing if it had to figure out the language on its own. And the electrical signals picked up by the implant showed no sign of language specialization, with the authors concluding that there are "no language-specific electrodes."

     

    The researchers used the data obtained during the system's training to test aspects of its performance, obtaining results that are also consistent with this idea. For example, pre-training the system on nothing but English words dramatically cut down on the time needed to train it on Spanish afterward, consistent with the two sharing similar activity profiles. The researchers also identified which electrodes were critical for interpreting specific syllables and found that similar patterns of electrode use were best explained by similar-sounding syllables.
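    That transfer effect can be illustrated with a simple two-headed network: a shared trunk learns a syllable-like representation from English trials, and a Spanish decoder is then fine-tuned on top of it with far less data. The layer sizes, trial counts, and simulated data below are assumed stand-ins for illustration, not the study's actual models.

```python
# Sketch of cross-language transfer: reuse a feature extractor trained on
# English-word trials when training a Spanish-word decoder, so the second
# language needs less data. All sizes and data here are illustrative.
import torch
from torch import nn

n_features = 128 * 20            # hypothetical electrodes x time bins, flattened
n_english, n_spanish = 90, 88    # hypothetical per-language vocabulary sizes

# Shared trunk that maps neural features to a syllable-like representation.
trunk = nn.Sequential(
    nn.Linear(n_features, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
    nn.ReLU(),
)
english_head = nn.Linear(64, n_english)
spanish_head = nn.Linear(64, n_spanish)

def train(model, X, y, epochs, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

# 1) Pre-train the trunk and English head on plenty of English trials.
X_en = torch.randn(1000, n_features)
y_en = torch.randint(0, n_english, (1000,))
train(nn.Sequential(trunk, english_head), X_en, y_en, epochs=50)

# 2) Fine-tune on far fewer Spanish trials; the shared trunk is reused,
#    so only the new output head starts from scratch.
X_es = torch.randn(200, n_features)
y_es = torch.randint(0, n_spanish, (200,))
train(nn.Sequential(trunk, spanish_head), X_es, y_es, epochs=10)
```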

     

    "This provides compelling evidence that a shared syllable representation can allow data collected in one language to be repurposed for a second language," they conclude.

     

    Note the use of "can" and not "will" in that sentence, though. As noted above, bilingual speakers will sometimes switch language mid-sentence, which the current system isn't prepared to handle. And there are lots of languages with features not found in English, such as the clicks of some African languages or the centrality of intonation in Mandarin.

     

    Also, note that the system was only trained on a very limited vocabulary. It's likely that this training would still help if the vocabulary were expanded, since many new words would share syllables with the existing ones. But expanding the range of potential sentences will eventually make it harder for the GPT component to rank potential choices.

     

    Despite the limitations, it's a fair bet that the participant in this study, identified only by his nickname of Pancho, is happy to be able to communicate at all.

     

    Scientifically, the work largely confirms something you might expect: The portions of the brain that control the muscles needed to make the noises we associate with language aren't especially picky about which language they're handling. Still, it's always a good thing to have evidence in support of a hypothesis, and it's tough to imagine how we'd have gotten this evidence otherwise.

     

    Nature Biomedical Engineering, 2024. DOI: 10.1038/s41551-024-01207-5

     


