
Novel AI algorithm captures moving photons

The scene, rendered using video from an ultra-high-speed camera, shows pulses of light passing through the pop bottle, scattering off the liquid, hitting the ground, and reflecting back to focus on the cap. Credit: University of Toronto

Close your eyes and picture the iconic “bullet time” scene from The Matrix, in which Neo, played by Keanu Reeves, dodges bullets in slow motion. Now imagine witnessing the same effect, except that instead of a speeding bullet, you’re watching something moving a million times faster: light itself.

Computer scientists at the University of Toronto have built an advanced camera setup that allows them to visualize moving light from any perspective, paving the way for further research into a new type of 3D sensing technology.

The researchers have developed an advanced AI algorithm that can simulate what an ultrafast scene – a pulse of light speeding through a pop bottle or bouncing off a mirror – would look like from any perspective.

The work is published on the arXiv preprint server.

David Lindell, an assistant professor in the Department of Computer Science in the Faculty of Arts & Science, says the feat requires producing video that appears to have the camera “flying” along with the very photons of light as they travel.

“Our technique allows us to capture and visualize the actual propagation of light in the same dramatic, slow-motion detail,” Lindell says. “We get a glimpse of the world at the timescale of the speed of light that we would normally never be able to see.”


The researchers believe this approach, recently presented at the 2024 European Conference on Computer Vision, has the potential to unlock new capabilities in several important research areas, including: non-line-of-sight imaging, which uses multiple bounces of light to “see” around corners and behind obstacles; imaging through scattering media such as fog, smoke, biological tissue, and turbid water; and 3D reconstruction, where understanding the behavior of light that scatters multiple times is important.

In addition to Lindell, the research team included U of T computer science Ph.D. student Anagh Malik, fourth-year engineering student Noah Juravsky, professor Kiriakos Kutulakos, Stanford University associate professor Gordon Wetzstein, and Stanford Ph.D. student Ryan Po.

The researchers’ key innovation lies in the AI algorithm they developed to render ultrafast videos from new perspectives, a challenge known in computer vision as “novel view synthesis.”

Traditionally, novel view synthesis methods have been designed for images or videos captured with ordinary cameras. The researchers extended the concept to data captured by ultrafast cameras operating at timescales comparable to the travel time of light itself. This created unique challenges, including the need for the algorithm to account for the finite speed of light and to model how light propagates through the scene.
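To make that challenge concrete, here is a minimal Python sketch – illustrative only, not the authors’ actual algorithm – of why rendering ultrafast video from a new viewpoint must model light’s travel time: moving the virtual camera changes the length of the path each pulse of light travels, and therefore the instant at which it appears to arrive. The function retime_sample and all positions below are hypothetical, chosen purely for illustration.

```python
# Minimal sketch (illustrative, not the paper's method) of re-timing a
# transient light sample when the virtual camera moves: arrival time
# depends on the total source-to-point-to-camera path length divided by c.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def retime_sample(point, source, old_cam, new_cam, t_old):
    """Shift a sample's arrival timestamp (seconds) for a new camera pose."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    # Recover the emission time by subtracting the old path's flight time.
    t_emit = t_old - (d(source, point) + d(point, old_cam)) / C
    # Add back the flight time along the new path.
    return t_emit + (d(source, point) + d(point, new_cam)) / C

# Light leaves a source at the origin, scatters off a point 5 m away,
# and returns to a co-located camera: a 10 m round trip.
t_old = 10.0 / C
t_new = retime_sample(point=[0, 0, 5], source=[0, 0, 0],
                      old_cam=[0, 0, 0], new_cam=[3, 0, 0], t_old=t_old)
print(f"camera moved 3 m sideways -> arrival shifts by {(t_new - t_old) * 1e9:.2f} ns")
```

A conventional novel view synthesis pipeline can ignore this shift entirely, because at everyday timescales light arrives effectively instantaneously; at the nanosecond resolution of ultrafast cameras, it dominates what each frame shows.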

Throughout their study, the researchers used a moving camera to visualize light in motion, including light refracting through water, reflecting off mirrors, and scattering from surfaces. They also demonstrated how to visualize relativistic phenomena, predicted by Albert Einstein, that appear only when objects move at a significant fraction of the speed of light.

Examples include the “searchlight effect,” in which an object appears brighter as it moves toward the observer, and “length contraction,” in which a fast-moving object appears shortened along its direction of travel.
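For readers who want the numbers behind the length-contraction example, the toy calculation below applies the standard special-relativity formula (it is not code from the study): an object moving at a fraction beta of the speed of light appears shortened by a factor of sqrt(1 - beta^2).

```python
# Toy calculation of relativistic length contraction: a rod of rest length
# L0 moving at speed v = beta * c appears shortened to L0 * sqrt(1 - beta^2).

import math

def contracted_length(rest_length_m, beta):
    """Apparent length of an object moving at a fraction `beta` of light speed."""
    return rest_length_m * math.sqrt(1.0 - beta**2)

for beta in (0.1, 0.5, 0.9, 0.99):
    print(f"v = {beta:.2f}c -> a 1 m rod appears {contracted_length(1.0, beta):.3f} m long")
```

At one-tenth the speed of light the effect is barely measurable, but at 0.99c the rod appears to shrink to about 14% of its rest length, which is why such effects only become visible at the extreme speeds the researchers’ videos capture.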


While current algorithms for processing ultra-high-speed video typically analyze a single video captured from a single perspective, the researchers say theirs is the first study to extend this analysis to multi-view “light-in-flight” video, making it possible to study how light travels through a scene from multiple viewpoints.

“Our multi-view light-in-flight videos serve as powerful educational tools and provide a unique way to teach the physics of light transport,” Malik said. “By visualizing how light behaves in real time, whether it’s refracting through a material or reflecting off a surface, we gain a much more intuitive understanding of how light moves through a scene.

“Furthermore, our technology has the potential to inspire creative applications in artistic fields such as filmmaking and interactive installations, where the beauty of light transport can be harnessed to create new kinds of visual effects and immersive experiences.”

This research also has great potential to improve the LIDAR (light detection and ranging) sensor technology used in self-driving cars. Typically, these sensors immediately process their raw measurements into a 3D image. The researchers’ work suggests that retaining the raw data, including the detailed time-resolved patterns of returning light, could enable systems that resolve finer detail, see through obstructions, and better characterize the materials in a scene.
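As a rough illustration of the difference, consider the simplified sketch below (synthetic data, no real LIDAR interface): a conventional pipeline collapses each pixel’s time-resolved return into a single depth value, whereas keeping the full transient histogram preserves later, indirect returns that richer methods could exploit.

```python
# Simplified sketch (synthetic data, not a real LIDAR API) contrasting
# conventional depth extraction with keeping the raw transient histogram.

import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN_S = 1e-10       # 100 ps histogram bins

# Fake per-pixel transient: a strong direct return plus a weaker,
# later return from light that bounced off another surface first.
transient = np.zeros(512)
transient[200] = 1.0   # direct surface return
transient[340] = 0.3   # indirect, multi-bounce return

# Conventional pipeline: keep only the strongest peak -> one depth value.
peak = int(np.argmax(transient))
depth_m = peak * BIN_S * C / 2.0  # halve the round-trip distance
print(f"conventional depth estimate: {depth_m:.2f} m")

# The raw histogram also retains the secondary return, which encodes
# information about occluded geometry that a single-depth output discards.
extra = [int(i) for i in np.flatnonzero(transient > 0) if i != peak]
print("additional returns preserved at bins:", extra)
```

The single depth value answers “how far is the nearest surface,” while the preserved secondary peak is exactly the kind of multi-bounce signal that non-line-of-sight and through-scattering methods rely on.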

While the researchers’ project focuses on visualizing how light moves through a scene from all directions, they note that light also carries hidden information about the shape and appearance of everything it touches. As a next step, they hope to recover this information by developing a method that reconstructs the 3D shape and appearance of an entire scene from multi-view light-in-flight video.

“This means we have the potential to create incredibly detailed three-dimensional models of objects and environments simply by observing how light passes through them,” Lindell said.

More information: Anagh Malik et al, Flying with Photons: Rendering Novel Views of Propagating Light, arXiv (2024). DOI: 10.48550/arXiv.2404.06493

Journal information: arXiv

Provided by University of Toronto

Citation: New AI algorithm captures photons in motion (2024, November 19), retrieved 19 November 2024 from https://phys.org/news/2024-11-ai-algorithm-captures-photons-motion.html

This document is subject to copyright. No part may be reproduced without written permission, except in fair dealing for personal study or research purposes. Content is provided for informational purposes only.
