
Dr Adam Hines, with his 'green' robot. L-R: Dr Tobias Fischer, Dr Adam Hines and Professor Michael Milford.
QUT Centre for Robotics researchers have developed a new robot navigation system.
Locational Encoding with Neuromorphic Systems (LENS) is set to transform how autonomous robots operate. At its core, LENS is inspired by the most efficient navigation system we know: the human brain.
Interestingly, this brain-inspired system operates on less than 10% of the energy of conventional systems.
“To run these neuromorphic systems, we designed specialised algorithms that learn more like humans do, processing information in the form of electrical spikes, similar to the signals used by real neurons,” said Dr Adam Hines, first author and neuroscientist.
Neuromorphic computing
For robots operating in the real world, like search and rescue missions, deep-sea exploration, or even ventures into space, energy is everything.
Standard navigation systems gobble up power, severely limiting how long and far these machines can go.
The system is based on neuromorphic computing, which cuts the energy needed for visual localization by up to 99%, allowing robots to operate for much longer and travel farther on limited power.
“We have known neuromorphic systems could be more efficient, but they’re often too complex and hard to use in the real world – we developed a new system that we think will change how they are used with robots,” Hines added.
LENS was demonstrated to recognize locations along an 8 km (5-mile) route with remarkable efficiency. What's more, it required just 180 kB of storage, a footprint nearly 300 times smaller than that of comparable systems.
Combination of technologies
How do the robots achieve this incredible efficiency? It’s a brilliant combination of technologies.
First, there’s a special kind of camera: an event camera.
“Rather than capturing a full image of the scene that takes in every detail in each frame, an event camera continuously senses changes and movement every microsecond,” explained Dr Tobias Fischer.
The camera functions by sensing pixel-level brightness changes, a process that closely mimics human visual processing.
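The article does not publish the LENS source code, but the event-camera principle it describes can be emulated from ordinary frames. The sketch below is a minimal, illustrative approximation (the function name, threshold value, and frame-differencing approach are assumptions, not the actual sensor pipeline): a pixel emits an event only when its log-brightness changes by more than a threshold, so static scenes produce no data at all.

```python
import numpy as np

def frames_to_events(prev_frame, curr_frame, threshold=0.2):
    """Emulate event-camera output from two grayscale frames.

    Real event cameras report per-pixel log-brightness changes
    asynchronously, with microsecond resolution; here we approximate
    that by thresholding the log-intensity difference between two
    consecutive frames. Returns per-pixel polarity: +1, -1, or 0.
    """
    eps = 1e-6  # avoid log(0) on black pixels
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # brightness increased
    events[diff < -threshold] = -1  # brightness decreased
    return events

# A static scene produces no events; only the changed pixel reports.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[0, 0] = 1.0  # one pixel brightens
print(frames_to_events(prev, curr))
```

Because unchanged pixels stay silent, the downstream network only ever processes movement, which is a large part of where the energy savings come from.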
This is important because visual place recognition—knowing where you are from what you see—is vital for humans and robots. Humans easily perform this task, yet it’s quite difficult for robots.
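At its simplest, visual place recognition can be framed as matching the current view's descriptor against descriptors stored along a previously traversed route. The toy sketch below (nearest-neighbor matching by cosine similarity; all names and the descriptor representation are assumptions for illustration, not the LENS method) shows the basic idea:

```python
import numpy as np

def recognize_place(query, reference_map):
    """Toy visual place recognition via nearest-neighbor matching.

    query: descriptor vector for the current view
    reference_map: (n_places, d) array of descriptors stored along a route
    Returns the index of the best-matching stored place,
    scored by cosine similarity.
    """
    ref = reference_map / np.linalg.norm(reference_map, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return int(np.argmax(ref @ q))

# Three stored places with orthogonal descriptors; the query most
# resembles place 2.
reference_map = np.eye(3)
print(recognize_place(np.array([0.1, 0.2, 0.9]), reference_map))
```

The reported 180 kB map size suggests LENS compresses these stored references far more aggressively than a plain descriptor database would.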
The “movement-focused” data is then processed by a brain-like spiking neural network on a low-power chip, all contained within a compact system.
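LENS's network runs on dedicated low-power neuromorphic hardware, but the behavior of a spiking neuron can be sketched in a few lines. Below is a minimal leaky integrate-and-fire (LIF) layer, a standard textbook model and only an illustration of the general technique, not the study's implementation (the function name and parameter values are assumptions): each neuron leaks charge over time, accumulates weighted input spikes, and emits a spike of its own when its membrane potential crosses a threshold.

```python
import numpy as np

def lif_layer(spike_trains, weights, tau=0.9, v_thresh=1.0):
    """Minimal leaky integrate-and-fire (LIF) spiking layer.

    spike_trains: (T, n_in) binary input spikes over T timesteps
    weights: (n_in, n_out) synaptic weights
    Each step, the membrane potential leaks by factor tau, integrates
    the weighted input spikes, and fires (then resets) at v_thresh.
    Returns (T, n_out) binary output spikes.
    """
    T = spike_trains.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)                     # membrane potentials
    out = np.zeros((T, n_out), dtype=np.int8)
    for t in range(T):
        v = tau * v + spike_trains[t] @ weights
        fired = v >= v_thresh
        out[t] = fired
        v[fired] = 0.0                      # reset after spiking
    return out

# One neuron driven by a steady input spike train: potential builds
# up over several steps, then the neuron fires and resets.
print(lif_layer(np.ones((5, 1)), np.array([[0.5]])))
```

Because neurons are silent except when they fire, computation (and hence energy use) scales with activity rather than with frame rate, mirroring the event camera that feeds it.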
“This study is a great example of working towards energy-efficient robotic systems that provide end-users with the performance and endurance they require for those robots to be useful in their application domains,” said Professor Michael Milford, one of the study authors.
Various robots could be equipped with the LENS system in the future. One day, it could help map disaster sites more extensively, explore other planets for prolonged durations, or monitor marine environments with incredible staying power.
Research and development in neuromorphic systems is expanding rapidly.
Last year, Intel launched Hala Point, the world’s largest neuromorphic computer system, aiming to make AI more sustainable. This brain-inspired computer processes information 50x faster while consuming 100x less energy than other systems.
The new findings were published in the journal Science Robotics.
ABOUT THE AUTHOR
Mrigakshi Dixit
Mrigakshi is a science journalist who enjoys writing about space exploration, biology, and technological innovations. Her work has been featured in well-known publications including Nature India, Supercluster, The Weather Channel and Astronomy magazine. If you have pitches in mind, please do not hesitate to email her.