PROJECTS
Scientifically-backed innovation
Rolling Call for Abstracts
Sample Projects
Many more projects are available to members on the private membership portal
3D Integration of AI Hardware with Direct Analog Input from Sensor Arrays
Jeehwan Kim
This research group works on AI hardware based on memristor neural networks, with emphasis on ultra-low-power operation for inference and online training, and on 3D integration of AI hardware with Si electronics.
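The core primitive such memristor networks accelerate is an analog matrix-vector multiply: input voltages drive the crossbar rows, each cell's conductance stores a weight, and column currents sum by Kirchhoff's current law. A minimal numerical sketch of that idea follows; the array size and conductance range are illustrative assumptions, not the group's device parameters.

import numpy as np

# Illustrative memristor crossbar: conductances G (in siemens) store the weights.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # 4 input rows x 3 output columns

v_in = np.array([0.1, 0.2, 0.0, 0.3])     # voltages applied to the rows (V)

# Kirchhoff current summation: each column current is the dot product of the
# input voltage vector with that column's conductances, i.e. an analog MVM.
i_out = v_in @ G                          # one output current per column (A)
print(i_out)                              # later digitized by per-column ADCs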
A Framework to Evaluate Energy Efficiency and Performance of Analog Neural Networks
Vivienne Sze, Joel Emer
This project pursues an integrated framework of energy-modeling and performance-evaluation tools to systematically explore and estimate the energy efficiency and performance of Analog Neural Network architectures, with full consideration of the electrical characteristics of the synaptic elements and interface circuits.
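For a flavor of what such a framework estimates, here is a toy first-order energy model of a single analog matrix-vector-multiply tile, including the DAC/ADC interface circuits that often dominate; the per-operation energies are placeholder assumptions, not the project's calibrated numbers.

# Toy first-order energy estimate for one analog MVM tile.
# All per-op energies below are illustrative placeholders, not calibrated values.
DAC_E = 50e-15      # J per input conversion
ADC_E = 500e-15     # J per output conversion
CELL_E = 1e-15      # J per synaptic cell activation

def tile_energy(rows, cols):
    """Energy of one rows x cols analog matrix-vector multiply."""
    return rows * DAC_E + cols * ADC_E + rows * cols * CELL_E

def ops(rows, cols):
    """MAC count of the same matrix-vector multiply."""
    return rows * cols

r, c = 256, 256
e = tile_energy(r, c)
print(f"energy/MVM: {e*1e12:.1f} pJ, efficiency: {ops(r, c)/e/1e12:.2f} TOPS/W")

Even this crude model makes the key trade-off visible: the ADCs, not the synaptic cells, set the energy floor unless the array is large enough to amortize them.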
Boltzmann Network with Stochastic Magnetic Tunnel Junctions
Luqiao Liu, Marc Baldo
Networks formed from devices with intrinsic stochastic switching properties can be used to build Boltzmann machines, which promise large efficiency gains over traditional von Neumann architectures for cognitive computing by exploiting the statistical mechanics of the building blocks themselves.
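The connection is direct: an MTJ's intrinsic stochastic switching can implement a binary stochastic neuron that flips with a sigmoid probability of its local field, which is exactly the Gibbs-sampling update a Boltzmann machine needs. A minimal software sketch, with random couplings rather than device-derived ones:

import numpy as np

rng = np.random.default_rng(1)
n = 8
J = rng.normal(0, 0.5, (n, n))
J = (J + J.T) / 2                      # symmetric couplings
np.fill_diagonal(J, 0)
h = rng.normal(0, 0.1, n)              # biases
s = rng.choice([-1, 1], n)             # spin states, one per stochastic MTJ

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gibbs sampling: each unit switches with sigmoid probability of its local
# field, which is what the MTJ's stochastic switching provides in hardware.
for sweep in range(1000):
    for i in range(n):
        local_field = J[i] @ s + h[i]
        s[i] = 1 if rng.random() < sigmoid(2 * local_field) else -1

print(s)  # after burn-in, s is approximately a Boltzmann-distributed sample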
CMOS-Compatible Ferroelectric Synapse Technology for Analog Neural Networks
Jesús del Alamo
This research project investigates a new ferroelectric synapse technology based on metal oxides that is designed to be fully back-end CMOS compatible and promises highly energy-efficient operation.
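Analog synapses of this kind are programmed with voltage pulses that nudge a conductance up or down along a nonlinear, saturating curve. A common behavioral model of that update, with illustrative (not measured) parameters:

import numpy as np

# Behavioral model of an analog synapse: conductance saturates nonlinearly
# with successive potentiation/depression pulses. Parameters are illustrative.
G_MIN, G_MAX, NL = 1e-6, 1e-4, 3.0   # conductance bounds (S), nonlinearity

def potentiate(g):
    return g + (G_MAX - g) * (1 - np.exp(-NL / 100))  # step shrinks near G_MAX

def depress(g):
    return g - (g - G_MIN) * (1 - np.exp(-NL / 100))  # step shrinks near G_MIN

g = G_MIN
for _ in range(50):
    g = potentiate(g)
print(f"conductance after 50 potentiation pulses: {g:.2e} S")

How linear and symmetric this curve is largely determines how accurately the synapse can be trained in place, which is why it is a central device metric.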
Electrochemistry and Material Science of Proton-based Electrochemical Synapses
Bilge Yildiz, Ju Li
Electrochemical ionic-electronic devices have an immense potential to enable a new domain of programmable hardware for machine intelligence.
In-Memory Compute Accelerators
Anantha Chandrakasan
Many edge machine-learning accelerators process and store sensitive data that could be of value to attackers. This project investigates side-channel vulnerabilities and develops protections for custom in-memory computing (IMC) integrated circuits at the edge.
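As a toy illustration of the attack class being defended against, the sketch below runs a correlation power analysis on synthetic traces: the simulated device leaks power proportional to the Hamming weight of (data XOR key), and correlating recorded traces against each key guess recovers the secret. All values are synthetic and not tied to any specific IMC circuit.

import numpy as np

rng = np.random.default_rng(2)
SECRET = 0x5A
data = rng.integers(0, 256, 5000)          # attacker-known input bytes

def hw(x):
    """Hamming weight of each byte in x."""
    bits = np.unpackbits(np.atleast_1d(x).astype(np.uint8))
    return bits.reshape(-1, 8).sum(axis=1)

# Simulated power traces: Hamming-weight leakage plus measurement noise.
traces = hw(data ^ SECRET) + rng.normal(0, 1.0, data.size)

# Correlate measured traces against the leakage predicted by each key guess;
# the correct guess produces the strongest correlation.
corrs = [np.corrcoef(traces, hw(data ^ k))[0, 1] for k in range(256)]
print(f"recovered key: 0x{int(np.argmax(corrs)):02X}")  # -> 0x5A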
Natural Language Processing Accelerator for Transformer Models
Song Han, Anantha Chandrakasan
This project aims to develop efficient processors for natural language processing directly on an edge device to ensure privacy, low latency, and extended battery life. The goal is to accelerate the entire transformer model (as opposed to just the attention mechanism) to reduce data movement across layers.
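A back-of-envelope count of per-layer weight traffic shows why attention-only acceleration is not enough; with BERT-base-like dimensions (an assumption made for illustration), the feed-forward network holds roughly two-thirds of a layer's weights:

# Back-of-envelope weight traffic per transformer layer, illustrating why
# accelerating only attention leaves most data movement on the table.
# Dimensions follow a BERT-base-like layer; byte counts assume fp16 weights.
d, ffn_mult, bytes_per_w = 768, 4, 2

attn_w = 4 * d * d              # Q, K, V, and output projection matrices
ffn_w = 2 * ffn_mult * d * d    # two FFN matrices (d -> 4d and 4d -> d)

total = attn_w + ffn_w
print(f"attention weights: {attn_w * bytes_per_w / 1e6:.1f} MB "
      f"({100 * attn_w / total:.0f}% of layer)")
print(f"FFN weights:       {ffn_w * bytes_per_w / 1e6:.1f} MB "
      f"({100 * ffn_w / total:.0f}% of layer)")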
Neuroscience Guided Ionic Computing
Bilge Yildiz, Michale Fee, Jesús del Alamo, Ju Li
New approaches to brain-inspired computing could achieve a greater than million-fold improvement in energy efficiency.
TinyML: Enable Efficient Deep Learning on Mobile Devices
Song Han
This project pursues efficient machine learning for mobile devices where hardware resources and energy budgets are very limited.
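One representative technique in this space is post-training int8 quantization, which cuts model memory roughly 4x relative to fp32 at a small accuracy cost. A minimal sketch on random weights (not a real model):

import numpy as np

# Minimal sketch of symmetric per-tensor int8 quantization, a standard
# technique for fitting deep learning models into tight mobile budgets.
rng = np.random.default_rng(3)
w = rng.normal(0, 0.1, (128, 128)).astype(np.float32)   # fp32 weights

scale = np.abs(w).max() / 127.0                         # map max |w| to 127
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dq = w_q.astype(np.float32) * scale                   # dequantize to compare

print(f"memory: {w.nbytes / 1024:.0f} KB -> {w_q.nbytes / 1024:.0f} KB")
print(f"max abs error: {np.abs(w - w_dq).max():.4f}")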