
U of T computer scientists develop video camera that acts as a ‘microscope for time’

Sotiris Nousias and Mian Wei work on an experimental setup that uses a specialized camera and an imaging technique that timestamps individual particles of light to replay video across large timescales. (Photo: Matt Hintsa)

What if you could record a video and control its playback speed by a factor of billions? A breakthrough from U of T computational imaging researchers allows a camera to capture everything from the bounce of light off a mirror to the bounce of a ball on a basketball court — all in one take. 

Dubbed by one researcher as a “microscope for time,” the imaging technique paired with a specialized camera could lead to improvements in a range of domains, from medical imaging to the LIDAR in mobile phones and self-driving cars.   

This research was carried out by computer science PhD student Mian Wei, postdoctoral fellow Sotiris Nousias, electrical and computer engineering PhD alumnus Rahul Gulve, Assistant Professor David Lindell and Professor Kyros Kutulakos, who are members of the Toronto Computational Imaging Group.  

They recently presented their findings at the 2023 International Conference on Computer Vision and received one of two best paper awards at the conference, out of more than 2,100 papers presented.  

“Our work introduces a unique camera capable of capturing videos that can be replayed at speeds ranging from the standard 30 frames per second to hundreds of billions of frames per second. With this technology, you no longer need to predetermine the speed at which you want to capture the world,” explains Nousias. 

“Our camera is fast enough to even let us see light moving through a scene. This type of slow and fast imaging where we can capture video across such a huge range of timescales has never been done before,” says Wei. 

While conventional high-speed cameras can record video up to around one million frames per second without a dedicated light source — fast enough to capture videos of a speeding bullet — they are too slow to capture the movement of light. The researchers add that a fundamental bottleneck is hit when trying to image much faster than a speeding bullet without a synchronized light source such as a strobe light or a laser, because very little light is collected during such a short exposure period, and a significant amount of light is needed to form an image. 

To overcome the issues of light deficiency and the need for a dedicated light source, the researchers use a special type of ultra-sensitive sensor called a free-running single-photon avalanche diode (SPAD). This sensor operates by timestamping the arrival of individual photons (particles of light) with precision down to trillionths of a second. To recover a video, they use a computational algorithm that analyzes when the photons arrive and estimates how much light is incident on the sensor at any given instant in time, regardless of whether that light came from room lights, sunlight, or even from lasers operating nearby. Reconstructing and playing back a video is a matter of retrieving the light levels corresponding to each video frame, the researchers note. 
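The researchers' actual estimator is considerably more involved than simple counting (a free-running SPAD, for instance, has a dead time after each detection that must be accounted for), but the basic idea of turning photon timestamps into frames can be sketched in a few lines. The Python snippet below is an illustrative assumption, not the published method: it approximates each frame's brightness as the number of photons that arrive within the frame interval, normalized by the interval length.

```python
import numpy as np

def replay(timestamps_s, t_start, t_end, fps):
    """Illustrative sketch: estimate per-frame brightness over [t_start, t_end) at `fps`.

    timestamps_s   : 1-D array of photon arrival times in seconds (one pixel).
    t_start, t_end : the slice of the recording to replay, chosen after capture.
    fps            : playback frame rate, also chosen after capture.
    """
    frame_edges = np.arange(t_start, t_end, 1.0 / fps)    # frame boundaries in seconds
    counts, _ = np.histogram(timestamps_s, bins=frame_edges)
    return counts * fps                                   # photons per second ~ brightness per frame
```

Because the frame rate is only a parameter of this post-processing step, the same recorded timestamps can be replayed at any speed after the fact.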

An illustration of a room with a single-photon camera that detects the arrival of individual photons. The novel method operates on the set of timestamps of individual photon detections recorded by such a camera. 

The researchers call this novel imaging regime “passive ultra-wideband imaging.” It enables post-capture refocusing in time, from transient to everyday timescales.  

“You don’t need to know what happens in the scene, or what light sources are there. You can record information and you can refocus on whatever phenomena or whatever timescale you want,” says Nousias. 

Wei compares their approach to combining the various video modes on a smartphone: slow-motion, normal video and time lapse. 

“In our case, our camera has just one recording mode that records all timescales simultaneously and then afterwards, we can decide,” he explains. “We can see every single timescale because if something’s moving too fast, we can zoom into that timescale, if something’s moving too slow, we can zoom out and see that too.” 

Using an experimental setup that employed multiple external light sources and a spinning fan, the team demonstrated their method’s ability to allow for post-capture timescale selection. In their demonstration, they use photon timestamp data captured by a free-running SPAD camera to play back video of a rapidly spinning fan at both 1,000 frames per second and 250 billion frames per second. 
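Continuing the hypothetical snippet above, a rough picture of that post-capture choice is a single stream of timestamps re-binned at two very different playback rates. The data below are simulated stand-ins, not the fan capture; in practice, naive binning collects far too few photons in picosecond-scale frames, which is precisely why the researchers' statistical reconstruction is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in data: a million photon timestamps over a 2-second capture.
# (A real capture would come from the free-running SPAD, not a random generator.)
timestamps = np.sort(rng.uniform(0.0, 2.0, size=1_000_000))

# The whole capture replayed slowly: 2 seconds of data at 1,000 frames per second.
slow = replay(timestamps, t_start=0.0, t_end=2.0, fps=1_000)

# The same data "zoomed in" to a 1-nanosecond slice at 250 billion frames per second.
fast = replay(timestamps, t_start=1.0, t_end=1.0 + 1e-9, fps=250e9)
```

The point of the sketch is only that the timescale is selected when the video is rendered, not when the scene is recorded.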

The imaging technique passively captures a dynamic scene once and allows re-rendering of video across multiple timescales.

The researchers say that while sensors with this photon-timestamping capability already exist — it's an emerging technology that has been deployed in the LIDAR and proximity sensors of iPhones — no one has used photon timestamps in this way to enable this type of ultra-wideband single-photon imaging.  

“What we provide is a microscope for time. So, with the camera you record everything that happened and then you can go in and observe the world at imperceptibly fast timescales. Such capability can open up a new understanding of nature and the world around us,” says Kutulakos.  

This ability to image without a dedicated light source across a huge range of timescales at up to hundreds of billions of frames per second could drive applications in other areas, the researchers note.  

“In biomedical imaging, you might want to be able to image across a huge range of timescales at which biological phenomena occur. For example, protein folding and binding happen across timescales from nanoseconds to milliseconds,” says Lindell. “In other applications like mechanical inspection, maybe you’d like to image an engine or a turbine for many minutes or hours and then after collecting the data, zoom in to a timescale where an unexpected anomaly or failure occurs.” 

In scenarios with self-driving cars, where each vehicle may use an active imaging system like LIDAR to emit light pulses, there can be challenges related to potential interference with other systems on the road. However, the researchers say their technology could “turn this problem on its head” by capturing and using “ambient” photons. For example, they say it might be possible to create universal light sources that any car, robot or smartphone can use without requiring the explicit synchronization that is needed by today’s LIDAR systems.  

Astronomy is among many areas where this method could lead to imaging advancements. One U of T researcher sees potential in using this technology to help understand celestial phenomena like fast radio bursts, which last only a few thousandths of a second. 

“Currently, there is a strong focus on pinpointing the optical counterparts of these fast radio bursts more precisely in their host galaxies. This is where the techniques developed by this group, particularly their innovative use of SPAD cameras, can be valuable,” says Suresh Sivanandam, interim director of the Dunlap Institute for Astronomy and Astrophysics and associate professor at the David A. Dunlap Department of Astronomy and Astrophysics. 

The researchers are optimistic about the potential use cases for their new approach. 

“Overall, we believe that using individual photon arrival times to capture video could have far-reaching implications for many imaging applications,” say the researchers. “We’re excited to explore how these ‘microscopes for time’ could result in new capabilities for 3D imaging, biomedical imaging, astronomical sensing, and beyond.” 
