Close your eyes and imagine the iconic “bullet time” scene from The Matrix, the one where Neo, played by Keanu Reeves, dodges bullets in slow motion. Now imagine witnessing the same effect, except that instead of slowed-down bullets, you are watching something that moves roughly a million times faster: light itself.
Computer scientists at the University of Toronto have built an advanced camera setup capable of viewing moving light from any perspective, paving the way for further research into new types of 3D sensing techniques.
The researchers developed a sophisticated AI algorithm that can simulate what a lightning-fast scene, such as a pulse of light passing through a pop bottle or bouncing off a mirror, would look like from any vantage point.
David Lindell, an assistant professor in the Department of Computer Science in the Faculty of Arts & Science, says this feat requires the ability to generate videos in which the camera appears to “fly” alongside photons of light as they move.
“Our technology can capture and visualize the actual propagation of light in the same dramatic, slow-motion detail,” says Lindell. “We are getting a glimpse of the world on light-speed timescales that are normally invisible.”
The researchers believe the approach, which was recently presented at the 2024 European Conference on Computer Vision (ECCV), can unlock new capabilities in several important areas of research, including: advanced sensing techniques such as non-line-of-sight imaging, a method that allows viewers to “see” around corners or behind obstacles using multiple bounces of light; imaging through scattering media such as fog, smoke, biological tissue, or turbid water; and 3D reconstruction, where understanding how light scatters multiple times is essential.
In addition to Lindell, the research team included Anagh Malik, a doctoral student in computer science at the University of Toronto; Noah Juravsky, a fourth-year bachelor of science in engineering student; Professor Kyros Kutulakos; Stanford University associate professor Gordon Wetzstein; and Stanford doctoral student Ryan Po.
The researchers’ key innovation is the AI algorithm they developed to generate high-speed videos from any viewpoint, a challenge known in computer vision as “novel view synthesis.”
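To make the idea of novel view synthesis concrete, the sketch below is a minimal, self-contained illustration rather than the team’s method (which relies on a learned neural representation of captured data). It assumes a hypothetical, precomputed 4D grid of light intensity over time and space and simply ray-marches that grid to render one instant of a propagating pulse from an arbitrary camera position; the functions make_pulse_volume and render_view are invented for this example.

```python
# Illustrative sketch only (assumed setup, not the researchers' algorithm):
# render one instant of a propagating light pulse from an arbitrary viewpoint
# by ray-marching a hypothetical 4D intensity grid indexed by (time, x, y, z).
import numpy as np


def make_pulse_volume(n_t=32, n_xyz=48):
    """Toy data: a thin spherical light pulse expanding from the grid centre."""
    coords = np.linspace(-1.0, 1.0, n_xyz)
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    volume = np.zeros((n_t, n_xyz, n_xyz, n_xyz), dtype=np.float32)
    for t in range(n_t):
        radius = t / (n_t - 1)                           # pulse front grows over time
        volume[t] = np.exp(-((r - radius) / 0.05) ** 2)  # thin expanding shell
    return volume


def render_view(volume, cam_pos, look_at, t_index, img_size=64, n_steps=128):
    """Ray-march one time slice of the grid from a new camera position."""
    frame = volume[t_index]                 # (X, Y, Z) intensity at one instant
    n_xyz = frame.shape[0]

    # Build an orthonormal camera basis from the viewing direction.
    forward = look_at - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right = right / np.linalg.norm(right)
    up = np.cross(right, forward)

    # Pinhole camera: one ray per pixel through a small image plane.
    u = np.linspace(-0.5, 0.5, img_size)
    uu, vv = np.meshgrid(u, u, indexing="xy")
    dirs = forward + uu[..., None] * right + vv[..., None] * up
    dirs = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)

    # March along each ray, accumulating intensity from samples inside the cube.
    image = np.zeros((img_size, img_size), dtype=np.float32)
    for dist in np.linspace(0.0, 4.0, n_steps):
        points = cam_pos + dist * dirs      # (H, W, 3) sample positions
        idx = np.round((points + 1.0) / 2.0 * (n_xyz - 1)).astype(int)
        inside = np.all((idx >= 0) & (idx < n_xyz), axis=-1)
        idx = np.clip(idx, 0, n_xyz - 1)
        image += inside * frame[idx[..., 0], idx[..., 1], idx[..., 2]]
    return image / n_steps


if __name__ == "__main__":
    vol = make_pulse_volume()
    # Render the same instant of the expanding pulse from two different viewpoints.
    front = render_view(vol, np.array([0.0, -3.0, 0.0]), np.zeros(3), t_index=16)
    side = render_view(vol, np.array([3.0, 0.0, 0.5]), np.zeros(3), t_index=16)
    print(front.shape, side.shape, float(front.max()), float(side.max()))
```

A faithful renderer of propagating light would also account for the finite travel time from each scene point to the chosen camera position, an effect the researchers’ videos capture; the sketch above ignores it for brevity.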