
Dr. Karunesh Ganguly and his team's brain-computer interface. UCSF
At UC San Francisco, a paralyzed patient can now think about moving his limbs and watch a robotic arm carry out his intention, thanks to a recently developed brain-computer interface (BCI), a device that interprets brain signals and converts them into movement commands.
Most previous BCIs stopped working reliably after a day or two. This one, remarkably, operated for a full seven months without major recalibration.
The biggest advance is the AI model at the heart of this BCI. It adapts to natural shifts in brain activity over time, allowing the participant to refine his imagined movements.
“This blending of learning between humans and AI is the next phase for these brain-computer interfaces,” said neurologist Karunesh Ganguly. “It’s what we need to achieve sophisticated, lifelike function.”
How the system works
The study participant, who was paralyzed by a stroke, was implanted with small sensors on the surface of the brain. When he pictured moving his limbs or head, these sensors captured the brain's activity. Over time, researchers found that while the brain's movement patterns remained consistent in shape, their exact locations shifted slightly from day to day.
This drift explains why previous BCIs failed so quickly.
To solve this problem, the research team developed an AI model that adjusts for these day-to-day changes. For two weeks, the participant visualized simple movements while the AI learned from his brain signals. When he first attempted to control a robotic arm, his movements were imprecise, so to improve accuracy he practiced on a virtual robotic arm that provided real-time feedback.
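The study does not publish its code here, but the core idea can be sketched: fit a decoder once on a reference day, then use a short calibration block on each later day to estimate an alignment that maps the drifted brain signals back into the reference space, leaving the decoder itself untouched. The minimal Python sketch below is purely illustrative; the synthetic data, the least-squares alignment, and all variable names are assumptions, not the study's actual model.

```python
import numpy as np

# Illustrative only: imagined-movement patterns keep their shape but
# drift slightly from day to day, so each day's neural features are
# re-aligned to a reference session before a fixed decoder maps them
# to movement commands.

rng = np.random.default_rng(0)
n_channels, n_samples = 64, 500

# Day 0 ("reference") neural features and the movement they encode.
X_ref = rng.standard_normal((n_samples, n_channels))
true_map = rng.standard_normal((n_channels, 3))   # -> x/y/z velocity
Y = X_ref @ true_map

# Fit the decoder once on the reference day (ordinary least squares).
decoder, *_ = np.linalg.lstsq(X_ref, Y, rcond=None)

# A later day: the same underlying patterns, slightly rotated and noisy.
drift = np.eye(n_channels) + 0.05 * rng.standard_normal((n_channels, n_channels))
X_day = X_ref @ drift + 0.1 * rng.standard_normal((n_samples, n_channels))

# Short calibration: estimate a linear map from today's features back
# into the reference space, then reuse the original decoder unchanged.
align, *_ = np.linalg.lstsq(X_day, X_ref, rcond=None)
Y_hat = (X_day @ align) @ decoder

print("error with alignment:   ", np.mean((Y_hat - Y) ** 2).round(4))
print("error without alignment:", np.mean((X_day @ decoder - Y) ** 2).round(4))
```

In this toy setup, realigning before decoding recovers most of the lost accuracy, which is the intuition behind a brief daily calibration; the real system reportedly uses a far more sophisticated AI model that also learns alongside the user.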
Once he had mastered the virtual arm, he quickly transferred those skills to the real robotic arm. He used it to grab blocks, rotate them, and place them in different positions. In a more advanced task, he opened a cabinet, took out a cup, and placed it under a water dispenser.
Lasting progress and future goals
Even months later, the participant could still use the robotic arm after a quick 15-minute calibration session. Ganguly and his colleagues are now working to make the arm's movements faster and more fluid, and they aim to test the system in a real home environment.
For people living with paralysis, even the most mundane tasks, like getting a drink or feeding themselves, can be a challenge. “I’m very confident that we’ve learned how to build the system now, and that we can make this work,” Ganguly said.
The study has been published in the journal Cell.