QUT postdoctoral research fellow Somayeh Hussaini. Image credit: QUT
All animals, big and small, are skilled navigators. They effortlessly traverse diverse terrains, from dense forests to open deserts. Researchers at Queensland University of Technology (QUT) have unraveled the secrets of animal brains’ navigation system to build smarter robots.
In particular, they turned to the brains of insects and other animals to gain insights into the development of energy-efficient robotic navigation systems.
Led by Somayeh Hussaini, a team of researchers has developed a novel navigation system that mimics the way animal brains process information.
“Animals are remarkably adept at navigating large, dynamic environments with amazing efficiency and robustness,” said Tobias Fischer of the QUT Centre for Robotics.
“This work is a step towards the goal of biologically inspired navigation systems that could one day compete with or even surpass today’s more conventional approaches,” Fischer added.
Advanced algorithms of animal brains
Even with recent technological breakthroughs, robots still fall short when it comes to navigating complex, real-world environments.
Moreover, they often depend on AI systems that are energy-hungry and computationally demanding to train.
The growing adoption of robots across different sectors requires the development of advanced navigation systems.
The QUT team’s approach is a place recognition algorithm built on Spiking Neural Networks (SNNs). It offers a more efficient and robust solution, which could help address these problems.
“SNNs are artificial neural networks that mimic how biological brains process information using brief, discrete signals, much like how neurons in animal brains communicate,” Hussaini said.
Hussaini further explained: “These networks are particularly well-suited for neuromorphic hardware—specialized computer hardware that mimics biological neural systems—enabling faster processing and significantly reduced energy consumption.”
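The “brief, discrete signals” Hussaini describes can be pictured with a leaky integrate-and-fire neuron, a standard building block in many spiking networks. The Python sketch below is a generic illustration of that idea only, not the QUT team’s implementation; the threshold and decay values are arbitrary choices for the example.

```python
def lif_neuron(input_current, threshold=1.0, decay=0.9):
    """Minimal leaky integrate-and-fire neuron.

    The membrane potential leaks a little each step, accumulates the
    incoming current, and emits a binary spike (then resets) whenever it
    crosses the threshold: a brief, discrete signal rather than a
    continuous activation value.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = decay * potential + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # spike
            potential = 0.0    # reset after spiking
        else:
            spikes.append(0)   # stay silent
    return spikes


# A steady drive produces sparse, discrete spikes: [0, 0, 1, 0, 0, 1, ...]
print(lif_neuron([0.4] * 10))
```

Because downstream neurons only do work when a spike arrives, this event-driven style of computation is part of what makes SNNs a natural fit for low-power neuromorphic chips.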
Suitable for space exploration
The new system uses compact “neural network modules” to recognize places from visual inputs.
These modules are not standalone; they work together as an ensemble, a group of models that make decisions collectively.
The navigation system gains several advantages by combining multiple spiking neural networks into an ensemble.
For instance, it enhances the system’s ability to recognize places under varying conditions, such as changes in lighting, weather, or the presence of occlusions.
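A common way to realize such an ensemble is to let each module score how similar the current camera view is to every reference place, then fuse those scores before deciding. The sketch below illustrates that general idea; the averaging rule and the example scores are assumptions made for illustration, not the published method.

```python
import numpy as np

def ensemble_place_match(module_scores):
    """Fuse similarity scores from several place-recognition modules.

    module_scores: list of 1-D arrays, one per module, where entry j is
    that module's similarity between the current view and reference
    place j. Averaging across modules means no single module's failure
    (for example under unusual lighting) dominates the final decision.
    """
    combined = np.mean(np.stack(module_scores), axis=0)
    return int(np.argmax(combined)), combined


# Hypothetical scores from three modules over four reference places.
scores = [
    np.array([0.2, 0.7, 0.1, 0.0]),
    np.array([0.3, 0.6, 0.1, 0.0]),
    np.array([0.1, 0.2, 0.6, 0.1]),  # one module disagrees
]
best_place, combined = ensemble_place_match(scores)
print(best_place)  # place 1 still wins despite the outlier module
```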
“Using sequences of images instead of single images enabled an improvement of 41% in place recognition accuracy, allowing the system to adapt to appearance changes over time and across different seasons and weather conditions,” said Professor Michael Milford.
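The sequence idea Milford describes is, in spirit, the classic sequence-matching trick from visual place recognition: instead of asking which reference place best matches the latest frame alone, the system sums single-frame similarity scores over a short run of consecutive frames. The sketch below is a simplified, hypothetical illustration of that principle (it assumes the route is revisited at a similar pace), not the code behind the reported 41% figure.

```python
import numpy as np

def sequence_match(similarity, seq_len=3):
    """Score candidate places using a short sequence of frames.

    similarity[t, p] is the single-frame similarity between query frame
    t and reference place p. Each candidate place p for the latest frame
    is scored by summing similarities along the diagonal ending at
    (last frame, p), so a place only wins if its neighbours also matched
    the preceding frames.
    """
    num_q, num_ref = similarity.shape
    t_end = num_q - 1
    scores = np.full(num_ref, -np.inf)
    for p in range(seq_len - 1, num_ref):
        scores[p] = sum(similarity[t_end - k, p - k] for k in range(seq_len))
    return int(np.argmax(scores)), scores


# Hypothetical scores: the last frame alone is ambiguous (places 4 and 5
# tie at 0.7), but the sequence score favours place 4, whose neighbours
# also matched the earlier frames.
sim = np.array([
    [0.1, 0.8, 0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.8, 0.1, 0.1, 0.7],
    [0.1, 0.1, 0.1, 0.8, 0.7, 0.1],
    [0.1, 0.1, 0.1, 0.1, 0.7, 0.7],
])
print(sequence_match(sim)[0])  # -> 4
```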
The system was successfully tested on a low-powered robot, which proved its practicality for energy-efficient applications.
By processing information in short, discrete bursts, SNNs can significantly reduce computational costs. This makes them ideal for energy-constrained robots, such as those used in space exploration or disaster relief.
“This work can help pave the way for more efficient and reliable navigation systems for autonomous robots in energy-constrained environments. Particularly exciting opportunities include domains like space exploration and disaster recovery, where optimizing energy efficiency and reducing response times are critical,” Hussaini said in the press release.
The findings were published in the journal IEEE Transactions on Robotics.