Adam Zewe | MIT News Office
November 29, 2022
As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.
An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.
However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.
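To get a feel for why such errors compound, the following minimal sketch (an illustration under assumed parameters, not the researchers' model) simulates a deep network in which every analog weight carries a small random multiplicative error of roughly 1 percent, and tracks how far the noisy output drifts from the ideal one as the signal passes through successive layers.

# Minimal illustrative sketch: small per-component errors in an analog
# matrix-vector multiply compounding across layers. The 1% error level and
# the Gaussian error model are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def ideal_layer(W, x):
    # Exact (digital) layer: matrix-vector product followed by a nonlinearity.
    return np.tanh(W @ x)

def noisy_layer(W, x, component_error=0.01):
    # Analog layer: each weight is perturbed by a small multiplicative error
    # before the matrix-vector product is carried out.
    W_actual = W * (1.0 + component_error * rng.standard_normal(W.shape))
    return np.tanh(W_actual @ x)

dim, depth = 128, 10
x0 = rng.standard_normal(dim)
x_ideal, x_noisy = x0.copy(), x0.copy()

for layer in range(depth):
    W = rng.standard_normal((dim, dim)) / np.sqrt(dim)  # same ideal weights for both paths
    x_ideal = ideal_layer(W, x_ideal)
    x_noisy = noisy_layer(W, x_noisy)
    rel_err = np.linalg.norm(x_noisy - x_ideal) / np.linalg.norm(x_ideal)
    print(f"layer {layer + 1}: relative error = {rel_err:.4f}")

Running the sketch shows the relative error growing layer by layer, which is the accumulation effect described above, even though each individual component is only slightly imperfect.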
Read the complete article at MIT News.