Researchers in California have made a significant breakthrough with an AI system that restores natural speech to paralyzed people in real time, using their own voices. The technology was demonstrated in a severely paralyzed patient who cannot speak.
This innovative technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCIs) with advanced artificial intelligence to decode neural activity into audible speech.
Compared with other recent attempts to generate speech from brain signals, this new system marks a major advance.

AI system (Kaylo Littlejohn, Cheol Jun Cho, et al. Nature Neuroscience 2025)
How it works
The system works with devices such as high-density electrode arrays that record neural activity directly from the brain's surface. It also works with microelectrodes that penetrate the brain's surface and with non-invasive surface electromyography sensors placed on the face to measure muscle activity. These devices tap into the brain to measure neural activity, which the AI learns to transform into the sounds of the patient's voice.
The neuroprosthesis samples neural data from the brain's motor cortex, the area that controls speech production, and the AI decodes that data into speech. According to study co-lead Cheol Jun Cho, the neuroprosthesis intercepts the signals where thought is translated into articulation and, in the middle of that, motor control.
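The article describes this pipeline only at a high level. Purely as an illustrative sketch, the idea can be pictured as two stages: a decoder that maps windows of motor-cortex activity to acoustic features, and a vocoder that renders those features in the patient's own voice. Every class name, channel count and dimension below is a hypothetical placeholder, not the study's actual model.

```python
import numpy as np

# Illustrative sketch only: a two-stage speech neuroprosthesis pipeline.
# Stage 1 maps motor-cortex features to acoustic features; stage 2 renders audio.
# NeuralDecoder and PersonalVocoder are hypothetical stand-ins, not the study's models.

class NeuralDecoder:
    def __init__(self, n_channels: int, n_acoustic: int = 80):
        rng = np.random.default_rng(0)
        # In the real system this would be a trained neural network, not random weights.
        self.weights = rng.normal(scale=0.01, size=(n_channels, n_acoustic))

    def decode_window(self, neural_window: np.ndarray) -> np.ndarray:
        """One window of electrode features (samples, channels) -> one acoustic frame."""
        pooled = neural_window.mean(axis=0)
        return pooled @ self.weights

class PersonalVocoder:
    """Would be conditioned on the patient's pre-injury voice; here it returns silence."""
    def synthesize(self, frames: list, frame_ms: int = 80, sr: int = 16000) -> np.ndarray:
        return np.zeros(int(len(frames) * frame_ms / 1000 * sr), dtype=np.float32)

decoder = NeuralDecoder(n_channels=253)      # hypothetical high-density electrode grid
vocoder = PersonalVocoder()
window = np.random.randn(40, 253)            # one short window of fake neural features
audio = vocoder.synthesize([decoder.decode_window(window)])
```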
Key advances
- Real-time speech synthesis: The AI-based model streams intelligible speech from the brain in near real time, tackling the challenge of latency in speech neuroprostheses. This "streaming approach brings the same rapid speech-decoding capacity of devices like Alexa and Siri to neuroprostheses," according to Gopala Anumanchipalli, co-principal investigator of the study. The model decodes neural data in 80 ms increments, allowing uninterrupted use of the decoder and further increasing speed (see the sketch after this list).
- Naturalistic speech: The aim is to restore naturalistic speech, allowing more fluid and expressive communication.
- Personalized voice: The AI is trained on the patient's own voice from before the injury, generating audio that sounds like them. In cases where patients have no residual vocalization, the researchers use a pre-trained text-to-speech model and the patient's pre-injury voice to fill in the missing details.
- Speed and accuracy: The system can begin decoding brain signals and producing speech within about one second of the patient attempting to speak, a significant improvement over the eight-second delay reported in an earlier 2023 study.
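As a rough illustration of the streaming idea referenced in the first bullet, here is a minimal, self-contained sketch of chunk-by-chunk decoding. Only the 80 ms increment comes from the article; the sample rate, channel count, function names and placeholder math are assumptions, not the study's implementation.

```python
import numpy as np

FRAME_MS = 80        # decoding increment reported in the article
SR = 16000           # assumed audio sample rate (not stated in the article)
N_CHANNELS = 253     # hypothetical electrode count

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(N_CHANNELS, 80))   # stand-in for a trained decoder

def decode_frame(window: np.ndarray) -> np.ndarray:
    """One 80 ms window of neural features -> one acoustic frame (placeholder math)."""
    return window.mean(axis=0) @ W

def synthesize(frame: np.ndarray) -> np.ndarray:
    """Stand-in vocoder: a real system would render the patient's own voice."""
    return np.zeros(int(SR * FRAME_MS / 1000), dtype=np.float32)

def stream_speech(neural_stream, play):
    """Decode and play speech chunk by chunk instead of waiting for a full sentence."""
    for window in neural_stream:
        play(synthesize(decode_frame(window)))   # audible within roughly one chunk

# Fake data: ten consecutive 80 ms windows from the electrode array.
fake_stream = (rng.standard_normal((40, N_CHANNELS)) for _ in range(10))
stream_speech(fake_stream, play=lambda chunk: None)
```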
Overcoming challenges
One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient's pre-injury voice to fill in the missing details.
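One way to picture this workaround, purely as a hedged sketch: because the patient cannot produce audio to train against, a text-to-speech model rendering the patient's pre-injury voice generates reference audio for the sentences the patient silently attempts, and the decoder can then be trained on those references. The function names and the generic `pretrained_tts` placeholder below are assumptions, not the authors' published code.

```python
import numpy as np

# Hedged sketch: building training targets when the patient has no residual vocalization.
# Attempted sentences are rendered by a text-to-speech model in the patient's pre-injury
# voice, and those renderings stand in for the missing spoken audio during training.

def pretrained_tts(text: str, sr: int = 16000) -> np.ndarray:
    """Placeholder: would return synthetic audio of `text` in the pre-injury voice."""
    return np.zeros(sr, dtype=np.float32)   # silent stand-in, one second per prompt

def build_training_pairs(prompts, recorded_neural_data):
    """Pair each attempted sentence's neural recording with TTS-generated reference audio."""
    pairs = []
    for text, neural in zip(prompts, recorded_neural_data):
        target_audio = pretrained_tts(text)      # fills in for the missing vocalization
        pairs.append((neural, target_audio))     # a decoder is later trained on these pairs
    return pairs

prompts = ["hello how are you", "i would like some water"]
neural_recordings = [np.random.randn(500, 253) for _ in prompts]   # fake neural data
training_pairs = build_training_pairs(prompts, neural_recordings)
```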
Impact and future directions
This technology has the potential to significantly improve the quality of life of people with paralysis and conditions such as ALS. It allows them to communicate their needs, express complex thoughts and connect with their loved ones more naturally.
"It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future," said UCSF neurosurgeon Edward Chang.
The next steps include speeding up the AI's processing, making the output voice more expressive and exploring ways to incorporate variations in pitch, tone and loudness into the synthesized speech. The researchers also aim to decode paralinguistic features from brain activity so the synthesized speech can reflect those changes in pitch, tone and loudness.
Kurt's key takeaways
What is really remarkable about this AI is that it doesn't just translate brain signals into any kind of speech. It aims for natural speech, using the patient's own voice. It's like giving people their voice back, and that is a game-changer. It offers new hope for effective communication and renewed connections for many people.
What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces? Let us know by writing to us at Cyberguy.com/contact.
Copyright 2025 Cyberguy.com. All rights reserved.