
Colloquium Series: Daniel Gehrig, "Event-Driven Perception: A Computational Paradigm for Adaptive Intelligence"

  • Bahen Centre for Information Technology, Room 3200, 40 Saint George Street, Toronto, ON M5S 2E4, Canada

Speaker:

Daniel Gehrig

Talk Title:

Event-Driven Perception: A Computational Paradigm for Adaptive Intelligence

Date and Location:

Thursday, February 26, 2026

Bahen Centre for Information Technology, BA 3200

This lecture is open to the public. No registration is required, but space is limited.

The grad roundtable that follows the talk is open only to current University of Toronto Department of Computer Science graduate students.

Abstract:

Our world evolves across multiple time scales: long periods of stability are interrupted by critical, potentially life-threatening events. Perception systems must respond reliably in both regimes, yet modern systems treat time uniformly. They oversample to avoid missing important information and thereby generate vast amounts of redundant data and computation. Biological systems, by contrast, perceive in an event-driven way: they transmit and process information only when a change occurs.
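The change-driven principle described above can be sketched in a few lines. The following is a toy model, not the speaker's implementation: each pixel emits an event only when its log-intensity has changed by more than a threshold since that pixel's last event, which is the core behaviour of an event camera. The function name, event tuple layout, and threshold value are all illustrative choices.

```python
import numpy as np

def to_events(frames, times, threshold=0.2):
    """Convert a dense frame sequence into sparse change events.

    A pixel emits an event (t, x, y, polarity) whenever its
    log-intensity differs from the value at its last event by more
    than `threshold` -- a simplified per-pixel event-camera model.
    """
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    reference = log_frames[0].copy()  # log intensity at each pixel's last event
    events = []
    for t, frame in zip(times[1:], log_frames[1:]):
        diff = frame - reference
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
            reference[y, x] = frame[y, x]  # reset reference at the event
    return events
```

Note that a static scene produces no events at all: data and computation scale with scene dynamics rather than with a fixed sampling rate.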

In this talk, I argue that event-driven perception, where representations are created only when change occurs, is a scalable computational principle. Using event cameras as a hardware instantiation, I show that such representations can capture high-speed phenomena without oversampling, while still enabling accurate signal reconstruction. I then address the algorithmic gap: how to design learning-based algorithms that operate directly on these representations and are themselves event-driven, cutting redundant computation. Beyond cameras, I finally show how generalized events can emerge in learnable embedding spaces, as a byproduct of event-driven computation, and how to generate event-driven representations from new sensing modalities such as inertial measurements. These representations improve learning-based inertial navigation with increased parsimony.
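To illustrate what "operating directly on events" can look like (a generic textbook example, not the specific methods of the talk), a classic event representation is the time surface: a per-pixel map holding the timestamp of each pixel's most recent event. It is updated in constant time per event, so compute scales with the event rate rather than with resolution times frame rate.

```python
import numpy as np

def time_surface(events, height, width):
    """Build a 'time surface' from (t, x, y, polarity) events.

    Each pixel stores the timestamp of its most recent event;
    untouched pixels stay at zero. Cost is O(number of events).
    """
    surface = np.zeros((height, width))
    for t, x, y, _polarity in events:
        surface[y, x] = t
    return surface
```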

To conclude, I outline a broader vision that extends event-driven perception to new sensing paradigms and embodied systems, toward more adaptive and parsimonious intelligence.

About Daniel Gehrig:

Daniel Gehrig is currently a postdoctoral researcher at the GRASP Lab at the University of Pennsylvania, working under the supervision of Prof. Kostas Daniilidis. He received his Ph.D. in 2023 from the University of Zurich (UZH), where he conducted research in computer vision and robotics at the Robotics and Perception Group (RPG) under the supervision of Prof. Davide Scaramuzza. For his doctoral work on “Efficient Data-driven Perception with Event Cameras”, he was awarded the highest distinction as well as the UZH Annual Award. Before his Ph.D., he completed his master’s degree in mechanical engineering at ETH Zurich in 2018, graduating with the highest distinction and receiving the Willi Studer Prize and ETH Medal for his work on event- and frame-based feature tracking at the Robotics and Perception Group. His research has been featured in IEEE Spectrum and on widely viewed science communication platforms such as Two Minute Papers.