Song Han

Associate Professor, MIT EECS


TinyML and Efficient Deep Learning

6.S965 • Fall 2022 • Course Website:

Have you found it difficult to deploy neural networks on mobile and IoT devices? Have you found training neural networks too slow? This course is a deep dive into efficient machine learning techniques that enable powerful deep learning applications on resource-constrained devices. Topics cover efficient inference techniques, including model compression, pruning, quantization, neural architecture search, and distillation; efficient training techniques, including gradient compression and on-device transfer learning; application-specific model optimization techniques for videos, point clouds, and NLP; and efficient quantum machine learning. Students will get hands-on experience implementing deep learning applications on microcontrollers, mobile phones, and quantum machines through an open-ended design project related to mobile AI. This course is open to the public, and each lecture is live-streamed on YouTube.
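To give a flavor of two of the inference techniques listed above, here is a minimal illustrative sketch (not course code) of magnitude-based weight pruning and symmetric 8-bit quantization; the function names and the toy weight vector are invented for this example.

```python
# Illustrative sketch of two techniques covered in the course:
# magnitude pruning and uniform symmetric int8 quantization.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).flatten()
    k = int(len(flat) * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.sort(flat)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

# Toy example: prune half the weights, then quantize the rest.
w = np.array([0.9, -0.05, 0.4, -0.7, 0.02, 0.6])
pruned = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(pruned)
dequant = q.astype(np.float32) * scale  # approximate reconstruction
```

Real systems (e.g., the MCUNet work linked below) combine such techniques with hardware-aware search rather than applying them in isolation.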

  • Time: Tuesday/Thursday 3:30-5:00 pm Eastern Time
  • Location: 36-156
  • Online lectures: The lectures are available on YouTube.
  • Office Hour: Wednesday 5:00-6:00 pm Eastern Time, 38-344 Meeting Room
  • Resources: MIT HAN Lab, GitHub, TinyML, MCUNet, OFA

Fall 2022: 6.S965 TinyML and Efficient Deep Learning

Spring 2022: 6.004 Computation Structures

Fall 2021: 6.004 Computation Structures

Spring 2021: 6.004 Computation Structures

Fall 2020: 6.004 Computation Structures

Spring 2020: 6.036 Introduction to Machine Learning

Fall 2019: 6.UAT Oral Communication

Spring 2019: 6.036 Introduction to Machine Learning

Fall 2018: 6.004 Computation Structures