Rob Matheson | MIT News Office
A growing area of artificial intelligence uses algorithms to automatically design machine-learning systems known as neural networks, producing designs that can be more accurate and efficient than those crafted by human engineers. But this technique, called neural architecture search (NAS), is computationally expensive.
In a paper presented at the International Conference on Learning Representations, MIT researchers describe a NAS algorithm that directly learns specialized convolutional neural networks (CNNs) for target hardware platforms. When run on a massive image dataset, the algorithm takes only about 200 graphics processing unit (GPU) hours, compared with the roughly 48,000 GPU hours required by other NAS algorithms, which could enable far broader use of these techniques.
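To give a rough sense of what a NAS loop does, the toy sketch below performs a random search over a tiny space of CNN hyperparameters. This is purely illustrative and is not the MIT method described in the article (which is a hardware-aware, gradient-based search); the search space, the `evaluate_config` proxy score, and its latency-style penalty are all made-up assumptions.

```python
import random

# Hypothetical search space of CNN hyperparameters (not from the paper).
SEARCH_SPACE = {
    "num_layers": [4, 8, 12],
    "kernel_size": [3, 5, 7],
    "width_multiplier": [0.5, 1.0, 1.5],
}

def sample_config(rng):
    """Draw one candidate architecture from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate_config(cfg):
    """Stand-in for training and validating a network: a made-up proxy
    score that rewards depth and width but penalizes large kernels,
    loosely imitating a hardware latency penalty."""
    return cfg["num_layers"] * cfg["width_multiplier"] - 0.5 * cfg["kernel_size"]

def random_search(n_trials=20, seed=0):
    """Sample n_trials candidate architectures and return the best-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_config(rng) for _ in range(n_trials)]
    return max(candidates, key=evaluate_config)

best = random_search()
print(best)
```

In a real NAS system, `evaluate_config` would involve training each candidate network (or a weight-shared proxy of it), which is where the thousands of GPU hours go; the MIT work's contribution is making that evaluation step dramatically cheaper.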
Complete article from MIT News.