TinyML & MCUNet [NeurIPS’20 spotlight] [NeurIPS’21]:
– MIT Homepage & MIT News, Learning on the edge
– MIT News, Tiny machine learning design alleviates a bottleneck in memory usage on internet-of-things devices
– Wired, AI Algorithms Are Slimming Down to Fit in Your Fridge
– MIT News, System brings deep learning to “internet of things” devices
– IBM, New IBM-MIT system brings AI to microcontrollers – paving the way to ‘smarter’ IoT
SpAtten [HPCA’21]:
– MIT Homepage Spotlight, A language learning system that pays attention — more efficiently than ever before
QuantumNAS [HPCA’22]:
– MIT News, Making quantum circuits more robust
DiffAugment [NeurIPS’20]:
– VentureBeat, MIT researchers claim augmentation technique can train GANs with less data
Once-For-All Network [ICLR’20]:
– VentureBeat, MIT aims for energy efficiency in AI model training
– MIT News, Reducing the carbon footprint of artificial intelligence
– Qualcomm, Research from MIT shows promising results for on-device AI
Hardware-Aware Transformer [ACL’20]:
– MIT News, Shrinking deep learning’s carbon footprint
– VentureBeat, New AI technique speeds up language models on edge devices
Temporal Shift Module [ICCV’19]:
– NVIDIA, New MIT Video Recognition Model Dramatically Improves Latency on Edge Devices
– MIT Technology Review, Powerful computer vision algorithms are now small enough to run on your phone
– Engadget, MIT-IBM developed a faster way to train video recognition AI
– MIT News, Faster video recognition for the smartphone era
ProxylessNAS [ICLR’19]:
– IEEE Spectrum, Using AI to Make Better AI
– MIT News, Kicking neural network design automation into high gear