For people with motor impairments or physical disabilities, completing daily tasks and house chores can be incredibly challenging. Recent advancements in robotics, such as brain-controlled robotic limbs, have the potential to significantly improve their quality of life.
Researchers at Hebei University of Technology and other institutes in China have developed an innovative system for controlling robotic arms that is based on augmented reality (AR) and a brain-computer interface. This system, presented in a paper published in the Journal of Neural Engineering, could enable the development of bionic or prosthetic arms that are easier for users to control.
"In recent years, with the development of robotic arms, brain science and information decoding technology, brain-controlled robotic arms have attained increasing achievements," Zhiguo Luo, one of the researchers who carried out the study, told TechXplore. "However, disadvantages like poor flexibility restrict their widespread application. We aim to promote the lightweight and practicality of brain-controlled robotic arms."
The system developed by Luo and his colleagues integrates AR technology, which overlays digital elements on the user's view of their surroundings, and a brain-computer interface with a method for controlling robotic limbs known as asynchronous control. This combination ultimately gives users greater control over robotic arms, enhancing the accuracy and efficiency of the resulting movements.
Asynchronous control methods are inspired by the way in which the human brain operates. More specifically, they try to replicate the brain's ability to alternate between working and idle states.
"The key point of asynchronous control is to distinguish the idle state and the working state of the robotic system," Luo explained. "After a user starts operating our robotic arm system, the system is initialized to the idle state. When the control command comes to the subject's mind, the subject can switch the system to the working state via the state switching interface."
After the system is switched into the working state, users select control commands for the movements they wish to perform, and the system transmits them to the robotic arm they are wearing. When the robotic arm receives these commands, it carries out the desired movement or task. Once the task is completed, the system automatically returns to the idle state.
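To make this control flow concrete, the minimal sketch below models the idle/working loop described above. It is only an illustration: the class names, the `decoder` and `arm` objects, and their methods are assumptions standing in for the authors' (unspecified) implementation.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()     # system waits; EEG is monitored but no commands are issued
    WORKING = auto()  # stimuli are shown and decoded commands drive the arm

class AsyncController:
    """Toy model of the idle/working loop described in the article.
    `decoder.detects_switch` and `decoder.decode_command` are hypothetical
    stand-ins for the SSVEP decoding pipeline; `arm` is any object
    exposing an execute() method."""

    def __init__(self, decoder, arm):
        self.state = State.IDLE  # system initializes to the idle state
        self.decoder = decoder
        self.arm = arm

    def step(self, eeg_window):
        if self.state == State.IDLE:
            # Stay idle until the decoder detects the user's "switch on" intent.
            if self.decoder.detects_switch(eeg_window):
                self.state = State.WORKING
        else:
            # In the working state, decode a movement command and execute it.
            command = self.decoder.decode_command(eeg_window)
            if command is not None:
                self.arm.execute(command)
                # After the task completes, return automatically to idle.
                self.state = State.IDLE
```

Keeping the decoder quiet in the idle state is what makes the control asynchronous: the user, not the system's stimulation schedule, decides when a command is issued.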
"A unique feature of our system is the successful integration of AR-BCI, asynchronous control, and an adaptive stimulus time adjustment method for data processing," Luo said. "Compared to conventional BCI systems, our system is also more flexible and easier to control."
The adaptive nature of the system created by Luo and his colleagues allows it to flexibly adjust how long AR stimuli are presented to users, based on their state while operating the robotic arm. This can significantly reduce the fatigue caused by looking at a screen or digital content. Moreover, compared with conventional brain-computer interfaces, the team's AR-enhanced system places fewer constraints on users' physical activity, allowing them to operate robotic arms with greater ease.
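The article does not detail the adjustment rule itself. A common way to realize this idea in SSVEP interfaces is a "dynamic stopping" loop, sketched below under that assumption: the stimulus is extended in small increments only until the classifier is confident enough, so the user sees no more flicker than necessary. The function names, step size, and threshold are hypothetical.

```python
def adaptive_stimulus_duration(acquire, classify,
                               step=0.3, max_time=4.0, threshold=0.9):
    """Illustrative dynamic-stopping loop (not the authors' method).

    acquire(t)  -> EEG recorded over the first t seconds of stimulation
    classify(x) -> (predicted_command, confidence in [0, 1])
    The stimulus is extended in `step`-second increments until the
    classifier is confident enough or `max_time` is reached."""
    t = step
    while t <= max_time:
        command, confidence = classify(acquire(t))
        if confidence >= threshold:
            return command, t  # stop early: less flicker, less visual fatigue
        t += step
    # Fall back to the full-length window if confidence never reaches threshold.
    return classify(acquire(max_time))[0], max_time
```

The design trade-off is that hard trials pay a small latency cost while easy trials end much sooner, which is where the reduction in visual fatigue comes from.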
"Ultimately, we were able to successfully integrate AR, brain-computer interfaces, adaptive asynchronous control and a new spatial filtering algorithm to classify the SSVEP signals, which provides new ideas for the development of a brain-controlled robotic arm," Luo said. "Our approach helps to improve the practicality of brain-controlled robotic arm and accelerate the application of this technology in real life."
The researchers evaluated their system in a series of experiments and attained highly promising results. Most notably, they found that users could perform their intended movements with the robotic arm with an accuracy of 94.97%. In addition, the ten users who tested the system were able to select a single command for the robotic arm in an average of 2.04 seconds. Overall, these findings suggest that the system improves the efficiency with which users can control robotic arms, while also reducing their visual fatigue.
In the future, the approach proposed by this team of researchers could help to enhance the performance of both existing and newly developed robotic arms. This could facilitate the adoption of such systems in healthcare settings and elderly care facilities, allowing patients and residents to carry out some of their daily activities independently and thus enhancing their quality of life.
So far, Luo and his colleagues have tested their system only on users with no motor impairments or disabilities. However, they soon hope to evaluate it in collaboration with elderly users and users with physical disabilities, to further explore its potential and applicability.
"We now plan to work on the following aspects to improve the system's reliability and practicability for social life," Luo added. "First, in terms of asynchronous control strategy, EOG and other physiological signals can be used to improve the asynchronous control process. Second, EEG decoding, transfer learning, and other methods can improve the model training process even further. Furthermore, in terms of the dynamic window, we could use other prediction methods to modify the system threshold in real-time."
© 2021 Science X Network