A new photonic processor uses light to run a deep neural network entirely on-chip, enabling fast hardware that can be trained directly on the device.
Critical for advanced machine-learning applications, deep neural network models have become so large and complex that they strain traditional electronic computing hardware. Photonic hardware, which uses light for computations, offers a faster and more energy-efficient solution but has historically faced limitations in performing all neural network computations, often requiring off-chip electronics that reduce speed and efficiency. MIT scientists and collaborators have developed a photonic chip that overcomes these challenges.
The fully integrated photonic processor performs all key neural network computations optically on the chip. It achieved more than 92% accuracy on a machine-learning classification task, completing computations in under half a nanosecond, on par with traditional electronic hardware. Fabricated using standard commercial processes, the chip is scalable and can integrate with existing electronics.
Machine learning with light
Deep neural networks process data through layers of interconnected nodes, alternating linear operations such as matrix multiplication with nonlinear ones such as activation functions; it is this combination that lets them solve complex problems. In 2017, researchers demonstrated an optical neural network on a photonic chip that performed matrix multiplication, but the nonlinear operations still had to run on digital processors. They later developed nonlinear optical function units (NOFUs) that perform the nonlinear operations directly on the chip; combining these with on-chip matrix multiplication yielded a fully optical three-layer deep neural network.
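To make the structure concrete, here is a minimal NumPy sketch of the computation such a three-layer network performs: each layer is a matrix multiplication followed by a nonlinearity. The layer sizes, weights, and the ReLU activation are illustrative assumptions; the chip implements its own optical nonlinearity, not ReLU.

```python
import numpy as np

def relu(x):
    # Example nonlinearity; the photonic chip uses its NOFUs instead.
    return np.maximum(0.0, x)

def three_layer_forward(x, weights):
    # Alternate linear stages (matrix multiplication) with nonlinear
    # stages, mirroring the layer structure described above.
    for W in weights[:-1]:
        x = relu(W @ x)       # linear op, then nonlinear activation
    return weights[-1] @ x    # final linear readout layer

# Illustrative sizes: 4 inputs -> 8 hidden -> 8 hidden -> 3 outputs.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)),
           rng.standard_normal((8, 8)),
           rng.standard_normal((3, 8))]
x = rng.standard_normal(4)
y = three_layer_forward(x, weights)
```

On the photonic chip, the matrix multiplications are carried out by programmable beamsplitter meshes and the activations by the NOFUs, so the entire loop above runs in the optical domain.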
A fully integrated network
The system encodes the parameters of a deep neural network into light, which is processed by an array of programmable beamsplitters that perform matrix multiplication. The signal then passes through the NOFUs, which implement nonlinear activation functions by siphoning off a small fraction of the light to photodiodes that convert it into electric current. Because this avoids external amplifiers, the process consumes little energy and keeps the signal on the chip. The design also supports in situ training, in which the chip itself is trained rather than a separate digital model; it achieved over 96% accuracy during training and over 92% during inference, comparable to traditional hardware, while completing these computations in under half a nanosecond.
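The NOFU mechanism can be caricatured numerically: tap off a small fraction of the optical power to a photodiode, and let the resulting photocurrent modulate the transmission of the remaining light, which yields a nonlinear input-output response without an external amplifier. The tap fraction, gain, and saturating form below are illustrative assumptions, not measured device parameters.

```python
import numpy as np

def nofu(field, tap=0.1, gain=5.0):
    # Toy model of a nonlinear optical function unit (illustrative only).
    # A fraction `tap` of the optical power goes to a photodiode; the
    # photocurrent then modulates the transmission of the remaining light.
    power = np.abs(field) ** 2
    photocurrent = gain * tap * power            # tapped light -> current
    transmission = 1.0 / (1.0 + photocurrent)    # current throttles the rest
    return np.sqrt(1.0 - tap) * field * transmission

# Sweep input amplitudes to see the saturating (nonlinear) response.
x = np.linspace(0.0, 2.0, 5)
y = nofu(x)
```

The key point the sketch captures is that the nonlinearity is driven by the signal's own optical power, so only a small portion of the light is converted to electronics and the rest continues through the network optically.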
Fabricated using standard CMOS foundry processes, the chip can be manufactured at scale with minimal errors. Future work will focus on scaling the device, integrating it with real-world electronics like cameras and telecommunications systems, and developing algorithms that harness the unique advantages of optical computing to enable faster, more energy-efficient training and inference.