Photonic processor could streamline 6G wireless signal processing

As the digital landscape evolves, the burgeoning demand for wireless bandwidth, driven by teleworking, cloud computing, and a myriad of connected devices, presents an immense challenge: managing the finite wireless spectrum. Current artificial intelligence methods, while promising for dynamic spectrum management, often fall short because of their power consumption and inability to operate in real time.

In a significant leap forward, researchers at the Massachusetts Institute of Technology (MIT) have unveiled a groundbreaking AI hardware accelerator specifically engineered for wireless signal processing. This novel optical processor harnesses the speed of light to perform machine-learning computations, enabling the classification of wireless signals in mere nanoseconds.

Dubbed the multiplicative analog frequency transform optical neural network (MAFT-ONN), this photonic chip operates approximately 100 times faster than its leading digital counterparts. Despite this rapid processing, it achieves impressive accuracy, converging to about 95 percent in signal classification. Beyond speed, the MAFT-ONN stands out for its scalability, flexibility, and efficiency: it is smaller, lighter, cheaper, and significantly more energy-efficient than existing digital AI hardware accelerators.

The implications of this innovation are vast, particularly for future 6G wireless applications. It could revolutionize cognitive radios, allowing them to dynamically optimize data rates by adapting wireless modulation formats to the ever-changing wireless environment. Furthermore, its capacity for real-time deep-learning computations at the edge holds potential for dramatic speedups in various applications, from enabling autonomous vehicles to make split-second decisions in response to environmental shifts to empowering smart pacemakers to continuously monitor a patient’s heart health.
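To make the cognitive-radio scenario concrete, the short Python sketch below shows one way such an adaptation loop might look: a fast classifier (reduced here to an SNR estimate) drives the choice of modulation format. The format ladder, SNR thresholds, and function names are illustrative assumptions, not details from the MIT paper.

```python
# Minimal sketch of the cognitive-radio loop described above: a fast
# classifier estimates channel conditions, and the radio then selects the
# densest modulation format the channel can support. The format ladder,
# SNR thresholds, and names below are illustrative assumptions only.

# Candidate modulation formats, ordered from most robust to highest rate,
# each paired with a hypothetical minimum workable SNR in dB.
MODULATION_LADDER = [
    ("BPSK", 0.0),
    ("QPSK", 8.0),
    ("16-QAM", 15.0),
    ("64-QAM", 22.0),
]

def pick_modulation(estimated_snr_db: float) -> str:
    """Return the highest-rate format whose SNR threshold is met."""
    choice = MODULATION_LADDER[0][0]
    for name, threshold in MODULATION_LADDER:
        if estimated_snr_db >= threshold:
            choice = name
    return choice

if __name__ == "__main__":
    for snr in (3.0, 10.0, 25.0):
        print(f"SNR {snr:5.1f} dB -> {pick_modulation(snr)}")
```

The faster the classification step runs, the more often a radio can re-run this loop, which is why nanosecond-scale inference matters for tracking a rapidly changing wireless environment.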

Dirk Englund, a professor in the MIT Department of Electrical Engineering and Computer Science, principal investigator in the Quantum Photonics and Artificial Intelligence Group and the Research Laboratory of Electronics (RLE), and senior author of the paper, emphasizes the technology's transformative potential. “There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” Englund states.

The research, detailed in a paper co-authored by lead author Ronald Davis III PhD ’24, Zaijun Chen (a former MIT postdoc now an assistant professor at the University of Southern California), and Ryan Hamerly (a visiting scientist at RLE and senior scientist at NTT Research), appeared recently in Science Advances.

The MAFT-ONN tackles the scalability challenge by encoding all signal data and executing machine-learning operations in the frequency domain, before the wireless signals are digitized. Its design performs all linear and nonlinear operations in-line, which is essential for deep learning. Whereas other approaches require a separate device for each computational unit, MAFT-ONN needs only one device per layer of the optical neural network. “We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” explains Davis.
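As a rough intuition for how frequency-domain encoding can compute many multiplications at once, the toy Python sketch below encodes an input vector and a weight vector as amplitudes of distinct tones and recovers their dot product from the baseband component of the mixed waveform. The tone frequencies, sample rate, and scaling are invented for illustration and do not model the actual MAFT-ONN hardware.

```python
import numpy as np

# Toy numerical sketch of a frequency-domain "single shot" multiply-accumulate.
# Inputs x and weights w are encoded as amplitudes of distinct tones; the
# product of the two waveforms (the analog mixing step) contains a baseband
# component proportional to the dot product x . w.

rng = np.random.default_rng(0)
n_neurons = 8                               # tones = "neurons" on one device
x = rng.normal(size=n_neurons)              # input activations
w = rng.normal(size=n_neurons)              # layer weights

fs = 1e6                                    # toy sample rate (Hz)
t = np.arange(0, 1e-2, 1 / fs)              # 10 ms observation window
freqs = 1e3 * (1 + np.arange(n_neurons))    # one carrier tone per neuron

# Encode each vector as a superposition of amplitude-scaled tones.
x_wave = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(x, freqs))
w_wave = sum(b * np.cos(2 * np.pi * f * t) for b, f in zip(w, freqs))

# Multiplying the waveforms mixes every tone pair; only matching frequencies
# contribute a DC term, which averages to (1/2) * (x . w).
baseband = np.mean(x_wave * w_wave)

print("analog estimate  :", 2 * baseband)
print("exact dot product:", np.dot(x, w))
```

The key point the sketch illustrates is that all of the products accumulate in a single analog measurement, rather than being computed one multiplication at a time.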

This efficiency is largely attributed to a technique called photoelectric multiplication, which also lets the optical neural network scale to additional layers without significant overhead. In simulations of signal classification, the MAFT-ONN achieved 85 percent accuracy in a single shot and rapidly converged to over 99 percent accuracy with multiple measurements, all within approximately 120 nanoseconds per inference.
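These accuracy figures admit a quick back-of-envelope check: if each shot were an independent classification with 85 percent accuracy and the results were combined by majority vote, a handful of shots would push overall accuracy toward 99 percent. The independence and majority-vote assumptions in the Python sketch below are ours, not a description of the paper's measurement scheme.

```python
from math import comb

def majority_vote_accuracy(p: float, shots: int) -> float:
    """P(majority of `shots` independent trials is correct), odd `shots`."""
    return sum(comb(shots, k) * p**k * (1 - p)**(shots - k)
               for k in range(shots // 2 + 1, shots + 1))

# With 85% per-shot accuracy, accuracy climbs quickly with repeated shots:
# 1 shot -> 0.8500, 3 -> 0.9393, 5 -> 0.9734, 7 -> 0.9879
for shots in (1, 3, 5, 7):
    print(f"{shots} shot(s): {majority_vote_accuracy(0.85, shots):.4f}")
```

At roughly 120 nanoseconds per inference, even several repeated measurements still complete in well under a microsecond.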

Looking ahead, the researchers aim to expand the MAFT-ONN’s capabilities through multiplexing schemes, enabling more complex computations and further scaling. Their ambition extends to adapting the technology for more intricate deep learning architectures, including transformer models and large language models (LLMs).

This groundbreaking work received funding support from various esteemed organizations, including the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
