
MIT’s Photonic Processor: Revolutionizing 6G Wireless Signal Processing with AI
As the demand for bandwidth surges with the proliferation of connected devices, managing the wireless spectrum efficiently becomes increasingly critical. Artificial intelligence (AI) offers a promising avenue for dynamic spectrum management, but current AI methods are power-hungry and often too slow for real-time processing.
Researchers at MIT have engineered an innovative AI hardware accelerator tailored for wireless signal processing. This photonic processor leverages light to perform machine-learning computations at unparalleled speeds, classifying wireless signals in mere nanoseconds.
The photonic chip operates about 100 times faster than the best digital alternatives while classifying signals with roughly 95 percent accuracy. Its scalability and flexibility make it a compelling option for a range of high-performance computing applications, and it is smaller, lighter, cheaper, and more energy efficient than traditional digital AI hardware accelerators.
This advancement holds particular promise for future 6G wireless applications, including cognitive radios that dynamically optimize data rates by adapting wireless modulation formats to changing environmental conditions.
By enabling edge devices to execute deep-learning computations in real time, the new hardware accelerator could deliver significant speedups in domains well beyond signal processing. Autonomous vehicles could react instantly to changes in their surroundings, and smart pacemakers could continuously monitor a patient’s cardiac health.
Dirk Englund, a professor in MIT’s Department of Electrical Engineering and Computer Science, emphasized the transformative potential of the work. “There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” he says.
The research team, led by Ronald Davis III PhD ’24, includes Zaijun Chen from the University of Southern California and Ryan Hamerly from NTT Research. Their findings have been published in Science Advances.
Light-Speed Processing
Existing digital AI accelerators convert wireless signals into images and analyze them using deep-learning models. Although accurate, the computational demands of deep neural networks hinder their suitability for time-critical applications.
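As a rough sketch of that conventional digital pipeline, the snippet below turns raw IQ samples into a spectrogram “image” and hands it to a placeholder classifier. The windowing parameters, the synthetic QPSK-style signal, and the stand-in classify_modulation function are illustrative assumptions, not details from the paper.

```python
# Sketch of the conventional digital pipeline: raw IQ samples become a
# time-frequency "image" (spectrogram), which a deep neural network would
# then classify. All parameters below are illustrative assumptions.
import numpy as np

def spectrogram(iq: np.ndarray, win: int = 64, hop: int = 32) -> np.ndarray:
    """Magnitude spectrogram of a complex baseband signal via a short-time FFT."""
    frames = [iq[i:i + win] * np.hanning(win)
              for i in range(0, len(iq) - win + 1, hop)]
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))

def classify_modulation(image: np.ndarray) -> str:
    """Stand-in for a trained CNN: a real system would run deep-learning
    inference here, which is the step that dominates latency and power."""
    return "QPSK" if image.std() > image.mean() else "unknown"

# Synthetic QPSK-like baseband signal, just to exercise the pipeline.
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)
iq = np.repeat(symbols, 8) + 0.05 * (rng.standard_normal(2048)
                                     + 1j * rng.standard_normal(2048))

print(classify_modulation(spectrogram(iq)))
```

Running the deep-learning inference step on a digital processor is what makes this approach too slow and power-hungry for time-critical edge applications.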
Optical systems accelerate deep neural networks by processing data using light, consuming less energy than digital computing. However, maximizing the performance of general-purpose optical neural networks for signal processing while maintaining scalability has been a challenge.
MIT researchers overcame this limitation by creating an optical neural network architecture specifically designed for signal processing, called a multiplicative analog frequency transform optical neural network (MAFT-ONN). This design addresses scalability by encoding all signal data and conducting machine-learning operations within the frequency domain, prior to signal digitization.
The optical neural network performs both linear and nonlinear operations inline, and only one MAFT-ONN device is needed per layer of the network, unlike other approaches that require a separate device for every individual computational unit.
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
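To make the frequency-domain idea concrete, here is a toy NumPy sketch in which the inputs and the weights of one layer are encoded as tones at distinct frequencies; multiplying the two waveforms and reading out the DC component of the product recovers the layer’s matrix-vector product in a single pass. The tone spacing, layer size, and digital readout below are illustrative assumptions, not the actual MAFT-ONN design, which performs this multiplication optically in the analog domain before the signal is ever digitized.

```python
# Toy illustration of frequency-domain encoding: each vector element rides on
# its own tone, multiplying two encoded waveforms produces a DC term equal to
# half their dot product, so one multiplication yields a whole weighted sum.
import numpy as np

N = 256                               # samples per integration window
n = np.arange(N)
dim = 8                               # toy layer width
freqs = np.arange(1, dim + 1)         # one tone per neuron (cycles per window)

def encode(vec):
    """Encode a vector as a sum of cosines, one frequency per element."""
    return vec @ np.cos(2 * np.pi * np.outer(freqs, n) / N)

rng = np.random.default_rng(1)
x = rng.standard_normal(dim)          # input activations
W = rng.standard_normal((4, dim))     # 4x8 weight matrix (one row per output)

sig_x = encode(x)
outputs = np.array([2 * np.mean(encode(w_row) * sig_x) for w_row in W])

print(np.allclose(outputs, W @ x))    # True: the waveform product did the math
```

Because every tone is multiplied simultaneously, all of the products for a layer emerge “in a single shot,” which is the property Davis describes above.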
Results in Nanoseconds
MAFT-ONN processes wireless signal data and relays the information for subsequent edge device operations. For example, by classifying a signal’s modulation, a device can automatically determine the signal type to extract the data.
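For instance, once a modulation label is available, the edge device can dispatch to the matching demodulator and recover the data. The sketch below is a hypothetical illustration of that downstream step; the constellation definitions and the nearest-symbol demod helper are assumptions, not part of the published system.

```python
# Hypothetical downstream use of a modulation label: pick the demodulator
# that matches the detected format and map symbols back to bit indices.
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
BPSK = np.array([1 + 0j, -1 + 0j])

def demod(symbols: np.ndarray, constellation: np.ndarray) -> np.ndarray:
    """Map each received symbol to the index of the nearest constellation point."""
    return np.argmin(np.abs(symbols[:, None] - constellation[None, :]), axis=1)

DEMODULATORS = {"BPSK": lambda s: demod(s, BPSK),
                "QPSK": lambda s: demod(s, QPSK)}

label = "QPSK"                        # e.g. the classifier's output
rx = QPSK[[0, 3, 1, 2]] + 0.05        # slightly noisy received symbols
print(DEMODULATORS[label](rx))        # -> [0 3 1 2]
```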
A key challenge was mapping machine-learning computations to the optical hardware. The team had to customize a machine-learning framework to fit the hardware and exploit physics to perform the desired computations.
In simulations, the optical neural network achieved 85 percent accuracy in signal classification in a single shot, which quickly converged to more than 99 percent accuracy using multiple measurements. The entire process only required about 120 nanoseconds.
“The longer you measure, the higher accuracy you will get. Because MAFT-ONN computes inferences in nanoseconds, you don’t lose much speed to gain more accuracy,” Davis adds.
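One simple way to see why extra measurements help (the paper’s actual aggregation scheme may differ): if each shot is an independent classification with 85 percent accuracy, majority-voting even a handful of shots drives the error rate down rapidly, as in this hypothetical simulation.

```python
# Hypothetical illustration: majority voting over independent 85%-accurate
# shots pushes accuracy above 99% within a few shots. The error model and
# voting rule are assumptions, not the paper's aggregation method.
import numpy as np

rng = np.random.default_rng(42)
n_classes, p_correct, trials = 4, 0.85, 20_000

def voted_accuracy(shots: int) -> float:
    correct = rng.random((trials, shots)) < p_correct          # which shots hit
    wrong = rng.integers(1, n_classes, size=(trials, shots))   # wrong labels 1..3
    labels = np.where(correct, 0, wrong)                       # true class is 0
    votes = np.apply_along_axis(np.bincount, 1, labels, minlength=n_classes)
    return float(np.mean(votes.argmax(axis=1) == 0))

for shots in (1, 3, 5, 7):
    print(shots, round(voted_accuracy(shots), 4))
# Accuracy climbs from ~0.85 toward >0.99 as shots are added.
```

Because each shot takes only nanoseconds, stacking several of them costs very little latency, which is the trade-off Davis points to.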
The researchers plan to use multiplexing schemes to increase computational capabilities and scale up MAFT-ONN. They also aim to apply their work to more complex deep learning architectures.
The research was funded by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.