July 2018: “AMC: AutoML for Model Compression and Acceleration on Mobile Devices” accepted by ECCV’18. This paper uses AI to perform model compression rather than relying on human heuristics. AMC automates the model compression process, achieves a better compression ratio, and is more sample efficient: it takes less time and does better than rule-based heuristics.
June 2018: Song presents invited paper “Bandwidth Efficient Deep Learning” at Design Automation Conference (DAC’18). The paper covers techniques to save memory bandwidth, networking bandwidth, and engineer bandwidth for efficient deep learning.
May 2018: “Path-Level Network Transformation for Efficient Architecture Search” accepted by International Conference on Machine Learning (ICML’18).
Feb 26, 2018: Song presented “Bandwidth Efficient Deep Learning: Challenges and Trade-offs” at the FPGA’18 panel session.
Jan 29, 2018: Deep Gradient Compression is accepted by ICLR’18. This technique reduces the communication bandwidth by 500x and improves the scalability of large-scale distributed training. [slides].
Oct 28, 2016: Song received the Best Poster Award at the 2016 Stanford Cloud Workshop for his poster entitled “Deep Compression, EIE and DSD: Deep Learning Model Compression, Acceleration, and Regularization”.