QUT Centre for Robotics, Brisbane QLD, Australia

Real-time, event-driven localization

Multi-environment on-robot deployment

LENS: Locational Encoding with Neuromorphic Systems

LENS is a compact, brain-inspired localization system for autonomous robots. Using a spiking neural network, a dynamic vision sensor, and a neuromorphic processor integrated on a single SPECK™ chip, LENS performs real-time, event-driven place recognition with models 99% smaller and over 100× more energy-efficient than traditional systems. Deployed on a Hexapod robot, it can learn and recognize over 700 places using fewer than 44k parameters, demonstrating the first large-scale, fully event-driven localization on a mobile platform.
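For a rough sense of that scale, here is a minimal, illustrative sketch of a single spiking readout layer sized for 700 places while staying under 44k parameters. The 60-dimensional event-count feature vector and the winner-take-all readout are assumptions for illustration, not the published LENS architecture.

import numpy as np

# Illustrative sizing sketch only, not the published LENS network.
# One weight per (feature, place) pair: 60 * 700 = 42,000 parameters < 44k.
rng = np.random.default_rng(0)
n_features, n_places = 60, 700
W = rng.random((n_places, n_features)).astype(np.float32)

def recognize(event_counts):
    """Return the index of the most strongly activated place neuron."""
    activation = W @ event_counts        # integrate weighted event counts
    return int(np.argmax(activation))    # winner-take-all readout

place_id = recognize(rng.poisson(2.0, n_features).astype(np.float32))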

SynSense SPECK™

Our system is a fully neuromorphic localization ecosystem developed for the SynSense SPECK™, which combines an event camera and a neuromorphic processor in a single System-on-Chip.

SynSense SPECK in a 3D-printed housing

Robotic Deployment

LENS is ready for deployment on resource-constrained robotic platforms for multi-terrain, multi-environment mapping and localization.

Hexapod robot

Performance

Energy efficiency

When deployed on neuromorphic hardware, LENS uses less than 1% of the power required by conventional compute platforms.

Energy efficiency metrics

Localization performance

LENS achieves strong localization accuracy, outperforming state-of-the-art VPR methods. Models are smaller than 180 KB, using just 44k parameters to map routes of up to 8 km.
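As a quick back-of-envelope consistency check on those numbers, assuming 4-byte float32 weights (an assumption, not a detail stated here):

params = 44_000
print(params * 4 / 1024)   # 171.875 KB -> consistent with a <180 KB model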

Localization performance

Unique training

LENS trains on static DVS frames of events accumulated over a user-specified time window. Temporal representations of event frames are trained in minutes for rapid deployment. Per-pixel event counts create identifiable place representations through their unique spiking patterns.
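The frame-building step described above can be sketched in a few lines. This is a minimal illustration assuming events arrive as (x, y, timestamp_us, polarity) tuples and a 128×128 event-camera resolution; the window length is illustrative, not the published LENS setting.

import numpy as np

WIDTH, HEIGHT = 128, 128        # assumed DVS resolution
WINDOW_US = 100_000             # user-specified time window (100 ms here)

def events_to_frame(events, t_start):
    """Accumulate per-pixel event counts over one time window."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint16)
    for x, y, t, p in events:
        if t_start <= t < t_start + WINDOW_US:
            frame[y, x] += 1    # count events regardless of polarity
    return frame

# Example: two of three synthetic events fall inside the window.
evts = [(10, 20, 5_000, 1), (10, 20, 7_500, -1), (64, 64, 250_000, 1)]
print(events_to_frame(evts, t_start=0)[20, 10])   # 2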

Related Links

This work builds on a variety of excellent prior work in robotic localization.

Sparse event VPR develops the concept of using a small number of event pixels to perform accurate localization.

VPRSNN introduced one of the first spiking neural networks for visual place recognition and inspired VPRTempo, an efficiently trained and inferenced network that was adapted for this work.

In addition, a great deal of excellent work has been done in localization and navigation using neuromorphic hardware.

Fangwen Yu's work developed an impressive multi-modal neural network for accurate place recognition. Le Zhu pioneered sequence learning using event cameras through vegetated environments. Tom van Dijk deployed an impressively compact neuromorphic system on a tiny autonomous drone for visual route following.

BibTeX

@article{hines2025lens,
      title={A compact neuromorphic system for ultra energy-efficient, on-device robot localization}, 
      author={Adam D. Hines and Michael Milford and Tobias Fischer},
      journal={},
      year={2025},
      volume={},
      number={},
      doi={},
      url={}
}