We've discussed going from paralysis to mobility, but what about going from the loss of vocal speech to synthesized speech produced simply by thinking? With brain-computer interfaces (BCIs), artificial intelligence, and neural networks, this far-fetched science fiction idea might just become a reality.
The cerebral cortex, your brain's most highly developed region, is the outer layer of the cerebrum, and it plays a large role in higher-order thinking and in the perception and production of language. By placing electrodes on the cortical surface while people listen and speak, scientists have been able to study the brain signals that give rise to speech. By associating particular patterns of activity with particular elements of language, they have been able to digitally reconstruct those signals and translate them into audible, and often understandable, speech. This was accomplished using neural networks that learn the mapping between brain activity and basic units of speech, or even simple sentences, by feeding linguistic and neural data into machine learning algorithms. Beyond simply interpreting brain waves, the technology also sheds light on the complex functioning of, and relationships between, the various brain regions involved in language.
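To make the idea concrete, here is a deliberately simplified sketch of the kind of regression setup described above: a small neural network trained to map frames of cortical activity to audio spectrogram frames. Everything in it is a stand-in I've made up for illustration (the electrode count, spectrogram size, synthetic random data, and the simple recurrent architecture); it is not the model used in the actual studies.

```python
import torch
import torch.nn as nn

N_ELECTRODES = 64   # hypothetical number of cortical-surface electrodes
N_MEL_BINS = 80     # hypothetical spectrogram resolution
SEQ_LEN = 100       # time steps per trial

class BrainToSpectrogram(nn.Module):
    """Map a sequence of neural-activity frames to spectrogram frames."""
    def __init__(self):
        super().__init__()
        # A recurrent layer captures how activity unfolds over time,
        # followed by a linear readout into spectrogram bins.
        self.rnn = nn.GRU(N_ELECTRODES, 128, batch_first=True)
        self.readout = nn.Linear(128, N_MEL_BINS)

    def forward(self, neural_frames):
        hidden, _ = self.rnn(neural_frames)
        return self.readout(hidden)

# Synthetic stand-ins for aligned (brain activity, heard/spoken audio) pairs.
neural = torch.randn(8, SEQ_LEN, N_ELECTRODES)
target_spectrogram = torch.randn(8, SEQ_LEN, N_MEL_BINS)

model = BrainToSpectrogram()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step: nudge the network so its predicted spectrogram frames
# move closer to the audio recorded alongside the brain activity.
optimizer.zero_grad()
loss = loss_fn(model(neural), target_spectrogram)
loss.backward()
optimizer.step()
print(f"reconstruction loss: {loss.item():.3f}")
```

In the real experiments, the targets would be spectrograms of audio recorded while participants listened or spoke, and a separate synthesis step would turn the predicted spectrogram back into an audible waveform.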
The first model of this kind was the “Brain-to-Text” system developed at KIT and the Wadsworth Center, but the technology is still being refined to make the resulting speech more fluid and understandable from thought alone. In these studies, scientists have recorded the brain activity that accompanies audible speech and then used those signals to produce other words and sentences through AI. However, things become far more complicated when we try to study the brain activity associated with language in patients and test subjects who are incapable of speaking, which is the eventual target population for this technology.
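The “Brain-to-Text” framing can be sketched as a classification problem: each segment of brain activity gets decoded into a word (or smaller unit) from a vocabulary. The example below is a toy illustration under assumptions of my own (a made-up five-word vocabulary, arbitrary feature sizes, random synthetic data, and an off-the-shelf linear classifier); it is not the KIT / Wadsworth Center system itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

VOCAB = ["yes", "no", "water", "help", "hello"]  # hypothetical word set
N_FEATURES = 256  # e.g. band-power features pooled across electrodes

rng = np.random.default_rng(0)

# Synthetic "recordings": feature vectors extracted while each word was
# spoken aloud, labeled with the word being said at the time.
X_train = rng.normal(size=(200, N_FEATURES))
y_train = rng.integers(len(VOCAB), size=200)

# Fit a simple linear classifier from neural features to word labels.
decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Decoding a new trial: predict which word the activity most resembles.
new_trial = rng.normal(size=(1, N_FEATURES))
print("decoded word:", VOCAB[decoder.predict(new_trial)[0]])
```

The sketch also hints at why the silent-speech case is so much harder: training any decoder like this requires activity paired with a known label, which is exactly what is missing when a person cannot speak aloud.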
So why should we care if similar technology already exists for people who are incapable of speech? Well, while there are BCIs that let patients use their eyes or other minute motor movements to move a computer cursor and type out words (Stephen Hawking, for example, used subtle cheek movements to control his cursor), this breakthrough, if perfected, could make communication dramatically easier for people who are not physically able to speak. Not only would it take less effort to produce speech, but the speech would also be faster and more efficient, since users could potentially control how they want to communicate (i.e., tone) rather than just what they want to communicate.
Works Cited:
http://www.sciencemag.org/news/2019/01/artificial-intelligence-turns-brain-activity-speech
https://www.thoughtco.com/anatomy-of-the-brain-cerebral-cortex-373217
https://www.teslarati.com/scientists-use-ai-neural-network-to-translate-speech-from-brain-activity/