Close your eyes and picture the iconic “bullet time” scene from The Matrix — where hacker Neo, played by Keanu Reeves, dodges bullets in slow motion.
Now imagine being able to witness the same effect, but instead of speeding bullets, you’re watching something that moves one million times faster: light itself.
This is now possible, thanks to new research from University of Toronto computer scientists who have built an advanced camera setup that — for the first time — can visualize light in motion from any perspective, opening avenues for further inquiry into new types of 3D sensing techniques.
Dubbed “Flying with Photons” by the researchers, this computational imaging work can capture ultrafast moments of a scene — like a pulse of light speeding through a pop bottle or bouncing off a mirror — from multiple viewpoints.
By developing a sophisticated AI algorithm, the researchers can simulate what a scene would look like from any vantage point. According to David Lindell, co-author and assistant professor in the Department of Computer Science, this means having the ability to generate videos where the camera appears to “fly” alongside the very photons of light as they travel.
The researchers believe this advancement has the potential to unlock new capabilities in several important research areas, including advanced sensing capabilities such as non-line-of-sight imaging, a method that allows viewers to “see” around corners or behind obstacles using multiple bounces of light; imaging through scattering media, such as fog, smoke, biological tissues or turbid water; and 3D reconstruction, where understanding the behaviour of light that scatters multiple times is critical.
“Our technology can capture and visualize the actual propagation of light with the same dramatic, slowed-down detail. We get a glimpse of the world at speed-of-light timescales that are normally invisible,” Lindell explains.
This new research by second-year computer science PhD student Anagh Malik, fourth-year engineering science undergraduate Noah Juravsky, and computer science professors David Lindell and Kyros Kutulakos, along with Gordon Wetzstein, an associate professor, and Ryan Po, a second-year PhD student, both from Stanford University, was recently presented at the 2024 European Conference on Computer Vision.
The researchers’ key innovation lies in the AI algorithm they developed to visualize ultrafast videos from any viewpoint, a challenge known in computer vision as “novel view synthesis.”
Traditionally, novel view synthesis methods are designed for images or videos captured with regular cameras. The researchers extended this concept to handle data captured by an ultrafast camera fast enough to record light itself in motion, which posed unique challenges: their algorithm has to account for the finite speed of light and model how light propagates through a scene.
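To make that idea concrete, the sketch below shows, in simplified form, what “accounting for the speed of light” can look like in time-resolved rendering: each sample along a camera ray contributes to a time bin determined by its total optical path length divided by the speed of light. This is a minimal illustration only; the function names, bin size and toy scene are assumptions for the example and are not the authors’ actual algorithm.

```python
import numpy as np

C = 3e8           # speed of light (m/s)
BIN_SIZE = 1e-11  # assumed temporal resolution of the sensor (10 ps per bin)
N_BINS = 2000     # assumed number of time bins in the transient histogram

def transient_along_ray(origin, direction, light_pos, scene_density,
                        n_samples=256, far=10.0):
    """Accumulate radiance into time bins along one camera ray.

    Unlike ordinary volume rendering, each sample's contribution is placed
    in the time bin given by its total optical path length
    (light source -> sample point -> camera) divided by the speed of light.
    """
    transient = np.zeros(N_BINS)
    ts = np.linspace(0.0, far, n_samples)        # distances along the ray
    points = origin + ts[:, None] * direction    # 3D sample points

    for t, p in zip(ts, points):
        # total optical path: light to the point, then point back to the camera
        path_len = np.linalg.norm(p - light_pos) + t
        delay = path_len / C                      # propagation delay in seconds
        bin_idx = int(delay / BIN_SIZE)
        if bin_idx < N_BINS:
            # scene_density is a stand-in for opacity/radiance at a 3D point
            transient[bin_idx] += scene_density(p)

    return transient

def blob(p):
    """Toy scene: a single bright blob one metre in front of the camera."""
    return np.exp(-20.0 * np.sum((p - np.array([0.0, 0.0, 1.0]))**2))

hist = transient_along_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                           light_pos=np.array([0.5, 0.0, 0.0]),
                           scene_density=blob)
print("peak time bin:", hist.argmax(),
      "->", hist.argmax() * BIN_SIZE * 1e9, "ns")
```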
With this work, the researchers produced the first moving-camera visualizations of light in motion: refracting through water, bouncing off a mirror or scattering off a surface. They also demonstrated how to visualize phenomena that only occur at a significant fraction of the speed of light, as predicted by Einstein’s theory of relativity. For example, they visualized the “searchlight effect,” which makes objects appear brighter when moving toward an observer, and “length contraction,” where fast-moving objects appear shorter along the direction they are travelling.
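The relativity formulas behind these effects are standard physics. The short Python snippet below, with illustrative function names chosen for this example, computes the Lorentz factor, the contracted length, and the Doppler factor that governs the brightening of an object moving toward an observer, for an object travelling at 80 per cent of the speed of light.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v):
    """Lorentz factor for an object moving at speed v."""
    beta = v / C
    return 1.0 / math.sqrt(1.0 - beta**2)

def contracted_length(rest_length, v):
    """Length contraction: a moving object is shorter along its
    direction of travel by a factor of 1/gamma."""
    return rest_length / lorentz_gamma(v)

def doppler_factor(v, angle_rad):
    """Relativistic Doppler factor; for an object moving toward the
    observer (angle near 0) it exceeds 1, which is what brightens the
    object -- the 'searchlight' (beaming) effect."""
    beta = v / C
    return 1.0 / (lorentz_gamma(v) * (1.0 - beta * math.cos(angle_rad)))

v = 0.8 * C
print(f"gamma           = {lorentz_gamma(v):.3f}")            # ~1.667
print(f"1 m rod appears = {contracted_length(1.0, v):.3f} m")  # ~0.6 m
print(f"Doppler factor  = {doppler_factor(v, 0.0):.3f}")       # ~3.0
```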
While current algorithms for processing ultrafast videos typically focus on analyzing a single video from a single viewpoint, the researchers explain their work is the first to extend this analysis to multi-view light-in-flight videos, allowing for the study of how light propagates from multiple perspectives.
“Our multi-view light-in-flight videos serve as a powerful educational tool, offering a unique way to teach the physics of light transport,” says Malik. “By visually capturing how light behaves in real-time — whether refracting through a material or reflecting off a surface — we can get a more intuitive understanding of the motion of light through a scene.”
“Additionally, our technology could inspire creative applications in the arts, such as filmmaking or interactive installations, where the beauty of light transport can be used to create new types of visual effects or immersive experiences,” he adds.
The research also holds significant potential for improving LIDAR (Light Detection and Ranging) sensor technology used in autonomous vehicles. These sensors typically process their measurements into 3D images right away, but the researchers’ work suggests a new approach: keep the raw data, which includes detailed light patterns, and use it later. They believe doing so could help create systems that outperform conventional LIDAR, seeing finer detail, looking through obstacles and better identifying materials.
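As a rough illustration of the difference, the sketch below contrasts the conventional step of collapsing a time-of-flight histogram to a single depth with keeping the raw histogram, whose weaker, later returns are what a richer system could exploit. The bin size, histogram values and function names are assumed for the example and are not drawn from any particular sensor or from the researchers’ system.

```python
import numpy as np

C = 3e8
BIN_SIZE = 1e-10  # assumed sensor resolution: 100 ps per histogram bin

def conventional_depth(histogram):
    """Conventional LIDAR processing: collapse the full time histogram
    to a single depth by taking the strongest return and discarding the rest."""
    peak_bin = int(np.argmax(histogram))
    return 0.5 * peak_bin * BIN_SIZE * C   # round-trip time -> one-way distance

# toy raw histogram: a strong return from a nearby surface (bin 40, ~0.6 m)
# plus a weaker, later return, e.g. light that passed through fog or
# bounced around an obstacle (bin 120, ~1.8 m)
hist = np.zeros(200)
hist[40] = 1.0
hist[120] = 0.3

print("conventional depth:", conventional_depth(hist), "m")   # ~0.6 m

# Keeping `hist` itself preserves the weaker second return, which later
# processing could use to see through scatter or around corners.
all_return_depths = np.nonzero(hist > 0)[0] * 0.5 * BIN_SIZE * C
print("all returns at depths:", all_return_depths, "m")       # [0.6, 1.8]
```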
While this specific project focused on visualizing how light moves through a scene from any direction, the researchers note that as light propagates, it carries “hidden information” about the shape and appearance of everything it touches. As the researchers look to their next steps, they want to unlock this information by developing a method that uses multi-view light-in-flight videos not just to see light moving, but to reconstruct the 3D geometry and appearance of the entire scene.
“This means we could potentially create incredibly detailed, three-dimensional models of objects and environments — just by watching how light travels through them,” concludes Lindell.