
MIT’s Photonic Processor: Revolutionizing 6G Wireless with Light-Speed AI
As the demand for bandwidth surges with the proliferation of connected devices, managing the wireless spectrum efficiently becomes increasingly critical. Engineers are turning to artificial intelligence (AI) to optimize wireless spectrum usage, aiming to reduce latency and enhance performance. However, conventional AI methods for processing wireless signals often consume significant power and struggle to operate in real time.
Researchers at MIT have engineered a groundbreaking AI hardware accelerator designed specifically for wireless signal processing. This innovative optical processor harnesses the speed of light to perform machine-learning computations, classifying wireless signals in mere nanoseconds.
The photonic chip operates approximately 100 times faster than leading digital alternatives, achieving about 95 percent accuracy in signal classification. Moreover, this hardware accelerator is scalable, flexible, and adaptable to diverse high-performance computing applications. It also boasts a smaller footprint, lighter weight, lower cost, and greater energy efficiency compared to digital AI hardware accelerators.
This device holds particular promise for future 6G wireless applications, including cognitive radios capable of optimizing data rates by dynamically adapting wireless modulation formats to evolving wireless environments.
By empowering edge devices to execute deep-learning computations in real time, this new hardware accelerator could deliver substantial speed improvements across various applications beyond signal processing. For example, it could enable autonomous vehicles to make instantaneous decisions in response to changing environmental conditions or allow smart pacemakers to continuously monitor patients’ heart health.
“There are numerous applications that could benefit from edge devices capable of analyzing wireless signals. Our work could unlock many possibilities for real-time and reliable AI inference. This is just the beginning of something potentially impactful,” notes Dirk Englund, a professor in MIT’s Department of Electrical Engineering and Computer Science, principal investigator of the Quantum Photonics and Artificial Intelligence Group and of the Research Laboratory of Electronics (RLE), and senior author of the paper.
The research team also includes lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc and current assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The findings were published in Science Advances.
Light-Speed Processing
Current state-of-the-art digital AI accelerators for wireless signal processing convert signals into images and process them using deep-learning models for classification. While this approach is highly accurate, the computational demands of deep neural networks render it impractical for many time-critical applications.
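For readers who want a concrete picture of that pipeline, here is a minimal sketch, assuming NumPy, SciPy, and PyTorch are available: a received waveform is turned into a spectrogram “image” and passed to a small, untrained CNN. The sample rate, toy signal, and four-class output are illustrative choices, not details from the paper.

```python
# Hedged sketch of the conventional digital pipeline: RF waveform -> spectrogram
# "image" -> small CNN classifier. All parameters below are illustrative.
import numpy as np
from scipy import signal
import torch
import torch.nn as nn

fs = 1e6                                     # assumed sample rate, 1 MS/s
t = np.arange(4096) / fs
iq = np.exp(2j * np.pi * 50e3 * t)           # toy received signal (pure tone)

# 1) Signal -> image: magnitude spectrogram
f, tt, Sxx = signal.spectrogram(iq, fs=fs, nperseg=128, return_onesided=False)
img = torch.tensor(np.log1p(np.abs(Sxx)), dtype=torch.float32)[None, None]  # (1, 1, F, T)

# 2) Image -> class scores with a small CNN (untrained here, so the output is arbitrary)
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 4),           # e.g., four hypothetical modulation classes
)
scores = cnn(img)
print("predicted class:", int(scores.argmax()))
```

The point of the contrast: every inference here passes through digitization and a long chain of digital multiply-accumulate operations, which is where the latency and power costs come from.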
Optical systems offer the potential to accelerate deep neural networks by encoding and processing data using light, which is also more energy-efficient than digital computing. However, researchers have struggled to optimize the performance of general-purpose optical neural networks for signal processing while ensuring scalability.
The MIT team overcame this challenge by developing an optical neural network architecture specifically tailored for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN).
The MAFT-ONN addresses scalability by encoding all signal data and performing all machine-learning operations within the frequency domain, before wireless signals are digitized.
This design enables the optical neural network to perform all linear and nonlinear operations in-line, both of which are required for deep learning.
This innovative design means that only one MAFT-ONN device is needed per layer for the entire optical neural network, in contrast to other methods that require one device for each individual computational unit, or “neuron.”
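One way to picture that frequency-domain encoding, as a rough numerical sketch rather than the authors’ implementation, is to let each neuron’s value set the amplitude of its own tone, so a single analog waveform carries an entire layer. The frequencies and activation values below are made up for illustration.

```python
# Minimal sketch (not the MAFT-ONN itself): many "neurons" ride on one analog
# waveform as tones at distinct frequencies; an FFT reads the values back out.
import numpy as np

fs = 1e6                                          # assumed sample rate
t = np.arange(10000) / fs
activations = np.array([0.2, 0.9, 0.5, 0.7])      # four toy neuron values
tones = np.array([10e3, 20e3, 30e3, 40e3])        # one frequency per neuron (assumed)

# Encode: each neuron's value becomes the amplitude of its own tone
waveform = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(activations, tones))

# Decode (for illustration only): read the amplitudes back out of the spectrum
spectrum = np.abs(np.fft.rfft(waveform)) * 2 / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
recovered = [spectrum[np.argmin(np.abs(freqs - f))] for f in tones]
print(np.round(recovered, 2))                     # ~[0.2, 0.9, 0.5, 0.7]
```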
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
The researchers leverage photoelectric multiplication, a technique that significantly enhances efficiency and enables the creation of an optical neural network that can be easily scaled up with additional layers without incurring extra overhead.
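As a hedged illustration of the underlying idea, the snippet below shows how a square-law (intensity) detector measuring |E1 + E2|² produces a cross term proportional to the product of the two incoming signals, which is the analog multiplication the technique exploits. The waveforms and frequencies are invented for the example.

```python
# Toy model of photoelectric multiplication: square-law detection of a combined
# field yields a 2*x*w cross term, i.e., an analog product. Illustrative values only.
import numpy as np

fs = 1e6
t = np.arange(10000) / fs
x = 0.6 * np.cos(2 * np.pi * 10e3 * t)        # "input" signal (assumed)
w = 0.5 * np.cos(2 * np.pi * 12e3 * t)        # "weight" signal (assumed)

photocurrent = (x + w) ** 2                   # square-law detector: x^2 + 2*x*w + w^2

# Isolate the cross term to show it equals the product (up to a factor of 2)
product_term = photocurrent - x ** 2 - w ** 2
print(np.allclose(product_term, 2 * x * w))   # True
```

Because the product of two tones lands at their sum and difference frequencies, such multiplications stay in the frequency domain where the network operates.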
Results in Nanoseconds
MAFT-ONN receives a wireless signal as input, processes the signal data, and relays the information for subsequent operations performed by the edge device. For example, by classifying a signal’s modulation, MAFT-ONN enables a device to automatically deduce the type of signal to extract the data it carries.
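As a purely digital stand-in for that task, the toy classifier below separates BPSK from QPSK samples with a simple moment test; it illustrates what modulation classification means, not how MAFT-ONN does it. The SNR, symbol count, and thresholds are arbitrary.

```python
# Toy modulation classifier (digital, heuristic): look at E[s^2] and E[s^4].
# Squaring collapses BPSK to one point; the fourth power collapses QPSK.
import numpy as np

rng = np.random.default_rng(0)

def toy_symbols(kind, n=2000, snr_db=15):
    """Generate noisy BPSK or QPSK symbols (illustrative parameters)."""
    if kind == "BPSK":
        s = rng.choice([1.0, -1.0], n).astype(complex)
    else:  # QPSK
        s = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, n)))
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return s + noise * 10 ** (-snr_db / 20)

def classify(s):
    if abs(np.mean(s ** 2)) > 0.5:
        return "BPSK"
    if abs(np.mean(s ** 4)) > 0.5:
        return "QPSK"
    return "other"

print(classify(toy_symbols("BPSK")), classify(toy_symbols("QPSK")))  # BPSK QPSK
```

Once the modulation format is known, the device can apply the matching demodulator to recover the data the signal carries.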
A key challenge in designing MAFT-ONN was mapping the machine-learning computations to the optical hardware.
“We couldn’t simply use an off-the-shelf machine-learning framework. We had to customize it to fit the hardware and determine how to leverage the physics to perform the computations we wanted,” Davis explains.
When the researchers tested the architecture on simulated signal-classification tasks, the optical neural network achieved 85 percent accuracy in a single shot and rapidly converged to more than 99 percent accuracy with multiple measurements. The entire process took approximately 120 nanoseconds.
“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
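A back-of-the-envelope model, using a simplified binary-vote assumption that is ours rather than the paper’s analysis, shows why repeated fast measurements help: if each independent shot is right 85 percent of the time, a majority vote over a handful of shots approaches certainty.

```python
# Majority-vote accuracy over N independent shots, each correct with probability p.
# Simplified binary model for illustration; p = 0.85 matches the single-shot figure above.
from math import comb

p = 0.85
for n in (1, 3, 5, 9):
    majority = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))
    print(f"{n} shots -> {majority:.3f}")    # 1: 0.850, 3: 0.939, 5: 0.973, 9: 0.994
```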
While state-of-the-art digital radio frequency devices can perform machine-learning inference in microseconds, optics can achieve it in nanoseconds or even picoseconds.
Looking ahead, the researchers plan to implement multiplexing schemes to enhance computations and scale up the MAFT-ONN. They also aim to extend their work to more complex deep-learning architectures capable of running transformer models or large language models (LLMs).
This work was supported, in part, by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.
