Thursday, November 21, 2024

Transforming Radio Waves Into 3D Environmental Views


A University of Pennsylvania research team has created a tool that uses radio waves to enhance robotic perception in challenging environments.

PanoRadar can interpret reflective surfaces like glass and see through fog, two obstacles that LiDAR cannot overcome. Image credit: Sylvia Zhang/UPenn

Researchers from the University of Pennsylvania (UPenn) have developed ‘PanoRadar’, a tool designed to give robots superhuman vision by transforming simple radio waves into 3D environmental views. The development addresses one of the biggest challenges in robotics: perceiving the environment in difficult conditions such as heavy smoke, fog, or even through certain materials. Traditional vision systems like cameras and LiDAR struggle under such circumstances, but PanoRadar can overcome these limitations. The technology could prove crucial for autonomous vehicles, robotics, and rescue operations, where visibility can be compromised by harsh environmental factors.

The tool uses radio waves, which can penetrate smoke and fog, offering a robust alternative for vision in low-visibility environments. It combines these signals with advanced artificial intelligence (AI) to produce high-resolution 3D images under such challenging conditions. “Our initial question was whether we could combine the best of both sensing modalities,” said Mingmin Zhao, assistant professor, UPenn. The approach pairs the resilience of radio signals with the detailed imaging capability of visual sensors, providing a clearer, more accurate view even in harsh conditions.


The system operates similarly to a lighthouse, with a rotating array of antennas that scans the environment by emitting radio waves and capturing their reflections. It then goes a step further, using AI and signal processing to sharpen those reflections, effectively creating a dense virtual array of measurement points. This technique allows PanoRadar to achieve imaging quality comparable to expensive LiDAR systems at a fraction of the cost.
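In signal-processing terms, this resembles synthetic-aperture imaging: echoes recorded at many antenna positions are summed coherently so that a small rotating sensor behaves like a much larger virtual array. The team has not released its pipeline, so the sketch below is only a minimal 2D illustration of that general principle; every name and parameter in it (the 77GHz carrier, the 5cm rotation radius, the point-reflector scene) is an assumption for illustration, not a PanoRadar specification.

```python
import numpy as np

# Minimal 2D sketch of synthetic-aperture imaging with a rotating antenna.
# Idealised point reflectors, single-frequency signal, no noise; the real
# PanoRadar pipeline (including its AI super-resolution stage) is far more
# sophisticated. All parameters here are illustrative assumptions.

C = 3e8            # speed of light (m/s)
FREQ = 77e9        # assumed mmWave carrier frequency (Hz)
WAVELEN = C / FREQ # ~3.9 mm wavelength

def antenna_positions(radius=0.05, n_steps=360):
    """Positions of an antenna rotating on a small arm, like a lighthouse."""
    angles = np.linspace(0, 2 * np.pi, n_steps, endpoint=False)
    return np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)

def simulate_echoes(ant_pos, reflectors):
    """Complex echo at each antenna position from ideal point reflectors."""
    echoes = np.zeros(len(ant_pos), dtype=complex)
    for r in reflectors:
        d = np.linalg.norm(ant_pos - r, axis=1)                 # one-way distance
        echoes += np.exp(-1j * 4 * np.pi * d / WAVELEN) / d**2  # round-trip phase
    return echoes

def backproject(ant_pos, echoes, grid_x, grid_y):
    """Coherently sum echoes over the virtual array; bright pixels = reflectors."""
    image = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            d = np.linalg.norm(ant_pos - np.array([x, y]), axis=1)
            # Undo the expected round-trip phase for this pixel, then sum:
            # only at true reflector positions do all terms align in phase.
            image[iy, ix] = np.abs(np.sum(echoes * np.exp(1j * 4 * np.pi * d / WAVELEN)))
    return image

if __name__ == "__main__":
    ants = antenna_positions()
    targets = [np.array([1.0, 0.5]), np.array([-0.8, 1.2])]  # two point reflectors
    echoes = simulate_echoes(ants, targets)
    img = backproject(ants, echoes, np.linspace(-2, 2, 80), np.linspace(-2, 2, 80))
    print("Brightest pixel index:", np.unravel_index(img.argmax(), img.shape))
```

Bright pixels in the back-projected image correspond to reflectors; in PanoRadar, an AI stage then refines such raw radio imagery toward LiDAR-like resolution.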

One of the biggest hurdles the team faced was maintaining high-resolution imaging while the robot moved. “To achieve LiDAR-comparable resolution with radio signals, we needed to combine measurements from many different positions with sub-millimetre accuracy,” said Haowen Lai, PhD student and lead author, UPenn. The team also integrated AI to help the system interpret its surroundings, leveraging the consistent patterns of indoor environments to improve accuracy.
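The quoted sub-millimetre requirement follows from simple arithmetic: the phase of a radio echo wraps around once per half wavelength of round-trip travel, and millimetre-wave wavelengths are only a few millimetres long. A quick back-of-the-envelope check (the 77GHz carrier is our assumption, typical for mmWave radar, not a published PanoRadar figure):

```python
import numpy as np

# Why sub-millimetre position accuracy matters for coherent combining.
# The 77 GHz carrier is an assumption, not a PanoRadar specification.
C, FREQ = 3e8, 77e9
wavelen = C / FREQ                  # ~3.9 mm wavelength
for err_mm in (0.1, 0.5, 1.0):
    # The round-trip path changes by twice the position error, so the
    # phase error is 4*pi*err/wavelength.
    phase_err = 4 * np.pi * (err_mm * 1e-3) / wavelen
    print(f"{err_mm} mm position error -> {np.degrees(phase_err):.0f} deg phase error")
```

A one-millimetre error already rotates the echo’s phase by roughly half a turn, enough to cancel rather than reinforce signals during coherent summation, which is why the robot’s motion must be tracked so precisely.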

The tool’s ability to see through smoke, glass, and other obstructions gives it a significant edge over traditional sensors. “The system maintains precise tracking through smoke and can even map spaces with glass walls,” said Gaoxiang Luo, master’s student and co-researcher, UPenn. This makes PanoRadar ideal for applications in autonomous vehicles, robotics, and rescue missions, where precise, reliable perception is essential.

Looking ahead, the team plans to integrate the tool with other sensor technologies to build richer multi-modal robotic perception systems. This will allow robots to adapt and respond to real-world challenges more effectively, expanding their capabilities for crucial tasks like navigation and rescue operations.

Tanya Jamwal
Tanya Jamwal is passionate about communicating technical knowledge and inspiring others through her writing.
