Friday, May 6, 2022 | 9:30am – 3:00pm ET
Multiple Speakers

The MIT AI Hardware Program is a new academia-industry collaboration aimed at defining and developing translational hardware and software technologies for the AI and quantum age. A collaboration between the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, this cross-disciplinary effort aims to develop technologies that deliver more energy-efficient systems for cloud and edge computing.

Join us for its inaugural symposium on Friday, May 6, 2022, at MIT in Building 46 (Singleton Auditorium and Atrium).

Registration is limited to the MIT community and corporate members.

For detailed information on area hotels and visiting the MIT campus, please see Visit MIT.

AGENDA

9:30 – 10:00 Registration and Breakfast
10:00 – 10:15 Introduction to the MIT AI Hardware Program
Jesus del Alamo & Aude Oliva – Program Co-Leads
10:15 – 10:30 Welcome
Anantha Chandrakasan – Dean of Engineering, and Vannevar Bush Professor, Department of Electrical Engineering and Computer Science
Daniel Huttenlocher – Dean of the Schwarzman College of Computing, and Henry Ellis Warren (1894) Professor, Department of Electrical Engineering and Computer Science
Hae-Seung ‘Harry’ Lee – Director of the Microsystems Technology Laboratories, and Professor, Department of Electrical Engineering and Computer Science
10:30 – 11:00 Keynote – Brain Guided Intelligence Hardware
Bilge Yildiz – Professor, Department of Nuclear Science and Engineering, and Professor, Department of Materials Science and Engineering

This research examines electrochemical synapses as building blocks for emulating and advancing learning models. What can we learn from biological synapses to build better, more energy-efficient engineered hardware? Scientific goals include modeling the circuits that underlie complex learned behaviors and emulating and advancing state-of-the-art learning rules. Engineering goals include building bio-inspired, energy-efficient hardware, new computing architectures, neural networks, and integrated high-density circuits.
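
For readers unfamiliar with the term, a "learning rule" can be as simple as the Hebbian update sketched below. This is purely illustrative Python with made-up activity values; it is not the speaker's model or hardware.

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.random(4)      # presynaptic activity (hypothetical values)
post = rng.random(3)     # postsynaptic activity (hypothetical values)
w = np.zeros((3, 4))     # synaptic weight matrix

eta = 0.1                # learning rate (assumed)
# Hebb's rule: a weight strengthens when its pre- and postsynaptic
# units are active at the same time.
w += eta * np.outer(post, pre)
print(w.round(3))
```
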
11:00 – 11:10 Break
11:10 – 12:30 Lightning Talks: Selected Projects

  • Photonic Accelerators for Deep Learning: From Low-power Edge Devices to Data Centers
    Dirk Englund – Associate Professor, Department of Electrical Engineering and Computer Science

Modern silicon photonics opens new possibilities for high-performance quantum information processing, such as quantum simulation and high-speed quantum cryptography.

  • Energy Efficient Analog Neural Networks
    Joel Emer – Professor of the Practice, Department of Electrical Engineering and Computer Science
    Vivienne Sze – Associate Professor, Department of Electrical Engineering and Computer Science

This project pursues an integrated framework, including energy-modeling and performance-evaluation tools, to systematically explore and estimate the energy efficiency and performance of analog neural network architectures, with full consideration of the electrical characteristics of the synaptic elements and interface circuits.
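
To make the flavor of such energy estimation concrete, here is a minimal first-order sketch in Python. The per-conversion and per-cell energy numbers are illustrative assumptions, not figures from this project or its tools.

```python
# Back-of-the-envelope energy model for one analog matrix-vector multiply.
# All device numbers below are illustrative assumptions.

def analog_mvm_energy_pj(rows: int, cols: int,
                         e_dac_pj: float = 0.5,    # per input DAC conversion (assumed)
                         e_adc_pj: float = 2.0,    # per output ADC conversion (assumed)
                         e_cell_fj: float = 50.0   # per synaptic-cell access (assumed)
                         ) -> float:
    """Estimate the energy in pJ of a rows x cols analog matrix-vector multiply."""
    dac = rows * e_dac_pj                     # one DAC conversion per input row
    adc = cols * e_adc_pj                     # one ADC conversion per output column
    array = rows * cols * e_cell_fj / 1000.0  # cell energy, fJ -> pJ
    return dac + adc + array

for n in (64, 256, 1024):
    total = analog_mvm_energy_pj(n, n)
    print(f"{n}x{n}: {total:9.1f} pJ total, {1000 * total / (n * n):5.1f} fJ/MAC")
```

Even this toy model shows the expected trend: larger arrays amortize the energy of the DAC/ADC interface circuits over more multiply-accumulate operations.
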

  • Tiny Machine Learning for AI Technologies
    Song Han – Robert J. Shillman (1974) Career Development Assistant Professor, Department of Electrical Engineering and Computer Science

This project pursues efficient machine learning for mobile and embedded devices, where hardware resources and energy budgets are severely limited.
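
As a rough illustration of what "severely limited" means in practice, the sketch below checks per-layer activation memory against an assumed microcontroller SRAM budget; the layer shapes and the 320 kB figure are hypothetical.

```python
# Feasibility check: does each layer's activation map fit the on-chip SRAM?
# Layer shapes and the SRAM budget are illustrative assumptions.

layers = [  # (name, height, width, channels) of each layer's int8 output
    ("conv1", 112, 112, 16),
    ("conv2", 56, 56, 24),
    ("conv3", 28, 28, 40),
]

SRAM_BUDGET = 320 * 1024  # bytes, roughly Cortex-M7 class (assumed)

peak = 0
for name, h, w, c in layers:
    act = h * w * c  # one byte per int8 activation
    peak = max(peak, act)
    print(f"{name}: {act / 1024:6.1f} kB activations")

verdict = "fits" if peak <= SRAM_BUDGET else "exceeds"
print(f"peak {peak / 1024:.1f} kB {verdict} the {SRAM_BUDGET // 1024} kB budget")
```
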

  • CMOS-Compatible Ferroelectric Synapse Technology for Analog Neural Networks
    Jesus del Alamo – Donner Professor, MacVicar Faculty Fellow, Department of Electrical Engineering and Computer Science

This research project investigates a new ferroelectric synapse technology based on metal oxides that is designed to be fully back-end CMOS-compatible and promises highly energy-efficient operation.
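
To illustrate one constraint such analog synapses face, the sketch below models programming a weight onto a device with a finite number of conductance levels; the level count and conductance range are assumed for illustration only.

```python
# Minimal model of programming an analog synapse that supports only a
# finite set of conductance states. Parameters are illustrative assumptions.

import numpy as np

N_LEVELS = 32                  # distinct programmable conductance states (assumed)
G_MIN, G_MAX = 1e-6, 1e-5      # conductance range in siemens (assumed)
LEVELS = np.linspace(G_MIN, G_MAX, N_LEVELS)

def program(g_target: float) -> float:
    """Return the nearest programmable conductance to the requested value."""
    return float(LEVELS[np.abs(LEVELS - g_target).argmin()])

# Network weights mapped onto conductances must tolerate this quantization:
rng = np.random.default_rng(1)
for g in rng.uniform(G_MIN, G_MAX, 4):
    print(f"target {g * 1e6:5.2f} uS -> programmed {program(g) * 1e6:5.2f} uS")
```
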

12:30 – 3:00 Lunch and Poster Session
The session will feature up to 40 posters on state-of-the-art MIT research on energy-efficient systems and devices, edge computing, and new hardware technologies. Please note that the poster session will not be available via Zoom.
Featured Posters (list in progress)

  • A Threshold-Implementation-Based Neural-Network Accelerator Securing Model Parameters and Inputs Against Power Side-Channel Attacks
  • Alloying Conduction Channel-Based Memristor Crossbar Array for Reliable Analog Computing
  • An Equivalent Circuit Model of Electrochemical Artificial Synapses for Neuromorphic Computing
  • Architectural Evaluation of Processing-In-Memory Systems
  • CMOS-Compatible Ferroelectric Synapse Technology
  • Delocalized Photonic Deep Learning on the Internet’s Edge
  • Electrochemical Artificial Synapses Based on Intercalation of Mg2+ Ions
  • Enhanced Neuromorphic Devices Through Fundamental Understanding of Dynamics of Oxygen Vacancies
  • Fourier Transformed Spectral Imaging Via Spatially Modulated Diffractive Optical Element Arrays
  • Interpretability-Aware Redundancy Reduction for Vision Transformer
  • Lower Power and Accurate Analog Computing by Hardware Friendly Algorithms
  • MCUNetV2: Memory-Efficient Patch-Based Inference for Tiny Deep Learning
  • Microtel: A Platform to Accurately Measure Micromobility Stress
  • NAAS: Neural Accelerator Architecture Search
  • Network Augmentation for Tiny Deep Learning
  • Neuromorphic Sensor Fusion for Next-Generation Edge Computing
  • PointAcc: Efficient Point Cloud Accelerator
  • Reconfigurable Heterogeneous Integration Enabled by Stackable Chips with Embedded Artificial Intelligence
  • Sub-Angstrom Super-Resolution Imaging of Site-Aligned Quantum Emitter Arrays with Machine Learning Techniques
  • Sparseloop: An Analytical Approach to Sparse Tensor Accelerator Modeling
  • SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning
  • TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning
  • TorchQuantum: Efficient Quantum Machine Learning System
  • TorchSparse: Efficient Point Cloud Inference Engine
  • Ultra-Low Power Natural Language Processor with Energy-Adaptive Model Configuration and Dynamic Memory Gating