SynSense SPECK™
Our system is a fully neuromorphic localization ecosystem on the SynSense SPECK™, a single chip that combines an event camera with a neuromorphic system-on-chip (SoC) processor.
Autonomous robots need effective localization and environmental understanding for real-world deployment. Visual place recognition is crucial for mapping and localization but typically requires computationally demanding models. We present a compact, efficient solution using brain-inspired neuromorphic computing, combining spiking neural networks, dynamic vision sensors, and a neuromorphic processor on a single SPECK™ chip. Our Locational Encoding with Neuromorphic Systems (LENS) can learn and recognize places with models over 99% smaller than traditional systems, using less than 1% of the energy. Deployed on a Hexapod robot, LENS demonstrates real-time, energy-efficient place recognition in varied environments, using fewer than 35k parameters to learn over 700 places. This is one of the first fully neuromorphic systems for real-time, large-scale localization on robotic platforms.
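To make this concrete, below is a minimal NumPy sketch of the event-to-place idea: events are accumulated into a count frame over one time window, downsampled to a small input vector, and read out by a single weight layer with a winner-take-all readout. The sensor size, window length, layer sizes, and readout here are illustrative assumptions, not the published LENS implementation.

# Illustrative sketch only, not the LENS implementation; sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Fake DVS events: rows of (timestamp_us, x, y, polarity) on a 128x128 sensor.
events = rng.integers(0, [1_000_000, 128, 128, 2], size=(5000, 4))

# 1) Accumulate events within one time window into a per-pixel count frame.
window_us = 100_000
in_window = events[events[:, 0] < window_us]
frame = np.zeros((128, 128))
np.add.at(frame, (in_window[:, 2], in_window[:, 1]), 1)

# 2) Downsample to a small input vector (the sparse-pixel idea).
small = frame.reshape(8, 16, 8, 16).sum(axis=(1, 3)).ravel()  # 8x8 = 64 inputs

# 3) One feedforward layer onto "place" neurons; weights would be learned.
n_places = 700
W = rng.standard_normal((n_places, small.size)) * 0.01
place = np.argmax(W @ small)  # winner-take-all readout
print(f"matched place id: {place}")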
We deploy LENS on a Hexapod robotic platform for multi-terrain, multi-environment mapping and localization.
A key benefit of our system is the energy efficiency afforded by using neuromorphic hardware and sensors. LENS requires less than 1% of the power of conventional von Neumann CPUs and GPU-based platforms such as the NVIDIA Jetson.
LENS achieves accurate localization in both small- and large-scale environments. With models under 150 KB and just 35k parameters, our system scales to place recognition along traversals of up to 8 km.
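As a back-of-envelope check of these numbers (the input size is our assumption, not taken from the paper), a single feedforward layer mapping roughly 49 sparse event pixels onto 700 place neurons already fits the quoted parameter and memory budget:

# Illustrative arithmetic only; the input size is an assumption.
n_inputs = 49                       # e.g. a 7x7 grid of sparse event pixels
n_places = 700                      # places learned along the traversal
params = n_inputs * n_places        # 34,300 weights -> fewer than 35k parameters
size_kb = params * 4 / 1024         # float32 storage
print(params, f"{size_kb:.0f} KB")  # 34300, ~134 KB -> under 150 KB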
LENS trains on static DVS frames of events accumulated over a user-specified time window. Temporal representations of these event frames are trained in minutes for rapid deployment. Per-pixel event counts yield distinctive spiking patterns that form identifiable place representations.
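A minimal sketch of this binning step, assuming events arrive as integer rows of (timestamp_us, x, y, polarity); the function name and defaults are hypothetical:

import numpy as np

def events_to_frames(events, window_us=100_000, sensor_hw=(128, 128)):
    """Accumulate per-pixel event counts into one frame per time window."""
    t = events[:, 0]
    frame_idx = (t // window_us).astype(int)  # which window each event falls in
    frames = np.zeros((frame_idx.max() + 1, *sensor_hw), dtype=np.int32)
    np.add.at(frames, (frame_idx, events[:, 2], events[:, 1]), 1)
    return frames                             # shape: (n_windows, H, W)

Each returned frame is the kind of static input that would be presented to the network during training.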
This work builds on a body of excellent prior work in robotic localization.
Sparse event VPR develops the concept of using a small number of event pixels to perform accurate localization.
VPRSNN introduced one of the first spiking neural networks for visual place recognition, inspiring VPRTempo, a network with efficient training and inference that we adapted for this work.
Beyond these, much excellent work has applied neuromorphic hardware to localization and navigation.
Fangwen Yu developed an impressive multi-modal neural network for accurate place recognition. Le Zhu pioneered sequence learning with event cameras through vegetated environments. Tom van Dijk deployed an impressively compact neuromorphic system on a tiny autonomous drone for visual route following.
@misc{hines2024compactneuromorphicultraenergyefficient,
  title={A compact neuromorphic system for ultra energy-efficient, on-device robot localization},
  author={Adam D. Hines and Michael Milford and Tobias Fischer},
  year={2024},
  eprint={2408.16754},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2408.16754},
}