Steve Nadis | MIT CSAIL
February 7, 2022
A new tensor language developed at MIT, with formally verified optimizations, could have benefits for high-performance computing.
High-performance computing is needed for an ever-growing number of tasks, such as image processing and deep learning on neural networks, that require plowing through immense volumes of data in a reasonable amount of time. It is widely believed that such workloads face an unavoidable trade-off between speed and reliability: if speed is the top priority, according to this view, then reliability will likely suffer, and vice versa.
However, a team of researchers, based mainly at MIT, is calling that notion into question, claiming that one can, in fact, have it all.
Read the complete article at MIT News.