Song Han

MIT EECS

Accelerated Deep Learning Computing

About

I am an assistant professor in EECS at MIT. I completed my Ph.D. at Stanford University, advised by Prof. Bill Dally. I was a postdoctoral researcher at Google Brain before joining MIT.

My research focuses on energy-efficient deep learning computing, at the intersection of machine learning and computer architecture. As Moore’s Law slows down, we often need to first make the algorithm hardware-friendly (e.g., Deep Compression, which can compress deep neural networks by 10-50x), then design specialized hardware for the target domain (e.g., EIE, an Efficient Inference Engine that performs NN inference directly on the sparse, compressed model). Combining algorithm and hardware makes the design space very large, so we need AI-assisted design automation (e.g., ProxylessNAS, which automatically searches for the optimal neural network architecture for a target hardware architecture). Several of these techniques have been adopted by industry.


My recent research focuses on efficient algorithms and hardware for computation-intensive AI applications. I am looking for PhD and UROP students interested in deep learning and computer architecture. Below are the research areas of HAN Lab:
H: High performance, High energy efficiency Hardware
A: AutoML, Architectures and Accelerators for AI
N: Novel algorithms for Neural Networks


Research Interests

In the post-ImageNet era, computer vision and machine learning researchers are solving more complicated AI problems using larger data sets, driving the demand for more computation.
However, Moore’s Law is slowing down and Dennard scaling has stopped, so the amount of computation per unit cost and power is no longer increasing at its historic rate. This mismatch between the supply of and demand for computation highlights the need to co-design efficient machine learning algorithms and domain-specific hardware architectures. The vast design space across algorithm and hardware is difficult for human engineers to explore, so we need hardware-centric AutoML and design automation to bridge the gap. We have recently been working on Hardware-Centric AutoML: ProxylessNAS [ICLR’19], AMC [ECCV’18], HAQ [CVPR’19].

I am interested in application-driven, domain-specific computer architecture research: achieving higher efficiency by tailoring the hardware architecture to the characteristics of the application domain, and innovating on efficient algorithms that are hardware-friendly. My current research centers on co-designing efficient algorithms and hardware systems for machine learning: freeing AI from power-hungry hardware, democratizing AI on inexpensive mobile devices, reducing the cost of running deep learning in data centers, and automating machine learning model design. I enjoy research at the intersection of machine learning algorithms and computer architecture.

News Blog

  • March 2019: ProxylessNAS covered by MIT News (“Kicking Neural Network Design Automation into High Gear”) and IEEE Spectrum (“Using AI to Make Better AI”).
     
  • March 2019: HAQ: Hardware-aware Automated Quantization with Multi-precision is accepted by CVPR’19 (oral). [paper]
    So far, ProxylessNAS [ICLR’19] => AMC [ECCV’18] => HAQ [CVPR’19] form a pipeline of Hardware-Centric Design Automation for Efficient Neural Networks.

  • Feb 2019: Song presented “Bandwidth-Efficient Deep Learning with Algorithm and Hardware Co-Design” at ISSCC’19 in the forum “Intelligence at the Edge: How Can We Make Machine Learning More Energy Efficient?”

  • Jan 2019: Song is appointed to the Robert J. Shillman (1974) Career Development Chair.
  • Jan 2019: “Song Han: Democratizing artificial intelligence with deep compression” by MIT Industry Liaison Program. [article][video]
  • Dec 2018: Our work on Defensive Quantization: When Efficiency Meets Robustness is accepted by ICLR’19. Neural network quantization is becoming an industry standard for compressing and efficiently deploying deep learning models. Is model compression a free lunch? Not if it is done carelessly: we observe that conventional quantization approaches are vulnerable to adversarial attacks. This paper aims to raise awareness of the security of quantized models, and we design a novel quantization methodology that jointly optimizes the efficiency and robustness of deep learning models. [paper]
  • Dec 2018: Our work on Learning to Design Circuits appeared at the NeurIPS workshop on Machine Learning for Systems. Analog IC design relies on human experts to search for parameters that satisfy circuit specifications using their experience and intuition, which is highly labor-intensive and time-consuming. This paper proposes a learning-based approach to size the transistors and help engineers shorten the design cycle. [paper]
  • Dec 2018: Our work on HAQ: Hardware-aware Automated Quantization appeared at the NeurIPS workshop on Machine Learning on the Phone and other Consumer Devices. HAQ leverages reinforcement learning to automatically determine the quantization policy (bit width per layer), and we take the hardware accelerator’s feedback into the design loop. Rather than relying on proxy signals such as FLOPs and model size, we employ a hardware simulator to generate direct feedback (both latency and energy) to the RL agent. Compared with conventional methods, our framework is fully automated and can specialize the quantization policy for different neural network architectures and hardware architectures. [paper] (A sketch of per-layer quantization under a bit-width policy appears after this list.)
  • Dec 2018: Our work on ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware is accepted by ICLR’19. Neural Architecture Search (NAS) is computation-intensive. ProxylessNAS reduces GPU hours by 200x compared with NAS and GPU memory by 10x compared with DARTS, while searching directly on ImageNet. ProxylessNAS is hardware-aware: it can design specialized neural network architectures for different hardware, making inference fast. With >74.5% top-1 accuracy, ProxylessNAS is 1.8x faster than MobileNet-v2, the current industry standard for mobile vision, in measured latency. [paper][code][demo]
  • Nov 2018: Our work on Efficient Video Understanding with Temporal Shift Module (TSM) is available on arXiv. Video understanding is more computation-intensive than image understanding and is expensive to deploy. TSM keeps the computational complexity of 2D convolution while achieving better temporal modeling ability than 3D convolution. Measured on a P100 GPU, TSM achieved 1.8% higher accuracy at 8x lower latency and 12x higher throughput compared with I3D. TSM ranks first on both the Something-Something V1 and V2 leaderboards as of Nov 2018. [paper][demo] (A sketch of the temporal channel shift appears after this list.)
  • Sep 2018: Song Han received an Amazon Machine Learning Research Award.
  • Sep 2018: Song Han received a SONY Faculty Award.
  • Sep 2018: Our work on AMC: AutoML for Model Compression and Acceleration on Mobile Devices is accepted by ECCV’18. This paper proposes a learning-based method to perform model compression, rather than relying on human heuristics and rule-based methods. AMC automates the model compression process, achieves a better compression ratio, and is more sample-efficient: it takes less time and performs better than rule-based heuristics. AMC compresses ResNet-50 by 5x without losing accuracy, and makes MobileNet-v1 2x faster with a 0.4% loss of accuracy. [paper / bibTeX]
  • June 2018: Song presented the invited paper “Bandwidth Efficient Deep Learning” at the Design Automation Conference (DAC’18). The paper discusses techniques to save memory bandwidth, networking bandwidth, and engineer bandwidth for efficient deep learning.
  • Feb 26, 2018: Song presented “Bandwidth Efficient Deep Learning: Challenges and Trade-offs” in a panel session at FPGA’18.
  • Jan 29, 2018: Deep Gradient Compression is accepted by ICLR’18. This technique can reduce the communication bandwidth by 500x and improve the scalability of large-scale distributed training. [slides]. (A sketch of the gradient sparsification idea appears after this list.)
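
To make the mixed-precision idea behind HAQ concrete, here is a minimal sketch (in NumPy) of applying a per-layer bit-width policy with symmetric uniform quantization. This is not the HAQ implementation: the reinforcement-learning agent and the hardware simulator that actually choose the policy are omitted, and the layer shapes and the bit widths in bitwidth_policy are made up for illustration.

    # Sketch only: apply a hypothetical per-layer bit-width policy with
    # symmetric uniform quantization (the RL search and hardware simulator
    # that HAQ uses to pick the policy are omitted).
    import numpy as np

    def quantize_uniform(w, n_bits):
        """Symmetric uniform quantization of a weight tensor to n_bits."""
        q_max = 2 ** (n_bits - 1) - 1        # e.g. 127 for 8 bits
        scale = np.abs(w).max() / q_max      # map the largest magnitude to q_max
        q = np.clip(np.round(w / scale), -q_max, q_max)
        return q * scale                     # de-quantize to inspect the error

    # Hypothetical layers and a hypothetical mixed-precision policy (bits per layer).
    layers = {"conv1": np.random.randn(64, 3, 3, 3),
              "conv2": np.random.randn(128, 64, 3, 3),
              "fc":    np.random.randn(1000, 512)}
    bitwidth_policy = {"conv1": 8, "conv2": 4, "fc": 6}

    for name, w in layers.items():
        w_q = quantize_uniform(w, bitwidth_policy[name])
        print(f"{name}: {bitwidth_policy[name]} bits, "
              f"mean abs error {np.abs(w - w_q).mean():.4f}")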
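
The core operation in TSM is a shift of part of the channels along the temporal dimension, so that the per-frame 2D convolutions that follow can mix information from neighboring frames without extra multiply-adds. Below is a minimal NumPy sketch of that shift, not the released code; the [N, T, C, H, W] layout and the 1/8 + 1/8 channel split (fold_div=8) are assumptions chosen for illustration.

    # Sketch only: shift 1/8 of the channels one frame toward the past and
    # 1/8 one frame toward the future; the remaining channels stay in place.
    import numpy as np

    def temporal_shift(x, fold_div=8):
        """x: activations with shape [N, T, C, H, W]."""
        n, t, c, h, w = x.shape
        fold = c // fold_div
        out = np.zeros_like(x)
        out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift toward the past
        out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift toward the future
        out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # untouched channels
        return out

    x = np.random.randn(2, 8, 64, 14, 14)  # 2 clips, 8 frames, 64 channels
    print(temporal_shift(x).shape)         # (2, 8, 64, 14, 14): same size, no new FLOPs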
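
The bandwidth saving in Deep Gradient Compression comes from transmitting only the largest gradient entries in each step and accumulating the rest locally until they grow large enough to send. The sketch below shows only that core sparsification step, in NumPy; the momentum correction, local gradient clipping, and warm-up used in the paper are omitted, and the 99.9% sparsity is an illustrative value.

    # Sketch only: top-k gradient sparsification with a local residual.
    import numpy as np

    def sparsify_gradient(grad, residual, sparsity=0.999):
        """Return (values, indices, updated residual) for one gradient tensor."""
        acc = grad + residual                         # include gradients skipped earlier
        flat = acc.ravel()
        k = max(1, int(flat.size * (1 - sparsity)))   # number of entries to transmit
        idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest entries
        values = flat[idx]
        new_residual = acc.copy()
        new_residual.ravel()[idx] = 0.0               # transmitted entries leave the residual
        return values, idx, new_residual

    grad = np.random.randn(1024, 1024)
    residual = np.zeros_like(grad)
    values, idx, residual = sparsify_gradient(grad, residual)
    print(f"sent {values.size} of {grad.size} gradient entries "
          f"({values.size / grad.size:.3%})")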

Education

  • Ph.D. Stanford University, Sep. 2012 to Sep. 2017
  • B.S. Tsinghua University, Aug. 2008 to Jul. 2012

Contact

  • Email: FirstnameLastname [at] mit [dot] edu
  • PhD, UROP, and summer intern applicants: please email han [dot] lab [dot] mit [at] gmail so that your message won’t be filtered.

Google Scholar, YouTube, Twitter, Facebook, LinkedIn