Event-based Vision

Efficient object detection with event-based vision algorithms, exploiting the low-energy and low-latency characteristics of neuromorphic sensors for edge deployment.

Event-based vision is an emerging approach in computer vision inspired by human visual perception. Unlike conventional techniques that process sequences of full images, event-based sensors respond asynchronously to local changes in scene brightness and emit a sparse stream of events rather than frames. This enables more energy-efficient processing and lower latency when handling visual data.
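The sketch below illustrates this event model in a simplified, frame-based form: each event carries a pixel location, a timestamp, and a polarity, and is emitted when the log-intensity at a pixel changes by more than a contrast threshold. The data layout and threshold value are illustrative assumptions, not the specification of any particular sensor.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    """A single event from a neuromorphic sensor."""
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds (real sensors offer microsecond resolution)
    polarity: int   # +1 = brightness increase, -1 = brightness decrease

def simulate_events(prev_frame, curr_frame, t, threshold=0.2):
    """Emit events where log-intensity changed by more than a contrast threshold.

    A per-frame approximation of the event generation condition
    |log I(x, y, t) - log I(x, y, t_ref)| >= C; real sensors operate
    asynchronously per pixel rather than on discrete frames.
    """
    eps = 1e-6
    diff = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    return [Event(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]
```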

YCB-Ev Dataset created at Fraunhofer IGD

We are developing algorithms for event-based object pose estimation, with applications in autonomous vehicles, robotics, and augmented reality. Building on recent advances in machine learning and computer vision, we train our models on synthetic data, avoiding manual labeling. The models are designed to run efficiently on edge devices such as the Nvidia Jetson Orin Nano, allowing us to explore the benefits of event-based vision in real-world scenarios.
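One common way to feed an event stream into a standard detection or pose-estimation network is to bin the events into a dense tensor. The sketch below shows such an accumulation step; the tuple layout and binning scheme are illustrative assumptions, not the exact pipeline used in this project.

```python
import numpy as np

def events_to_tensor(events, height, width, num_bins=5):
    """Accumulate events, given as (x, y, t, polarity) tuples, into a (num_bins, H, W) tensor.

    Events are split into temporal bins over their time span and summed by polarity,
    yielding a dense input suitable for a conventional CNN or transformer backbone.
    """
    tensor = np.zeros((num_bins, height, width), dtype=np.float32)
    if not events:
        return tensor
    t0, t1 = events[0][2], events[-1][2]
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single timestamp
    for x, y, t, p in events:
        b = min(int((t - t0) / span * num_bins), num_bins - 1)
        tensor[b, y, x] += p
    return tensor

# Example: three synthetic events on a 4x4 sensor, binned into two time slices
events = [(0, 1, 0.00, +1), (2, 3, 0.01, -1), (1, 1, 0.02, +1)]
grid = events_to_tensor(events, height=4, width=4, num_bins=2)
```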
