Media
A new way to let AI chatbots converse all day without crashing
MIT News, MIT Homepage
Feb 13, 2024
StreamingLLM
Technique enables AI on edge devices to keep learning over time
MIT News
Nov 16, 2023
PockEngine
StreamingLLM shows how one token can keep AI models running smoothly indefinitely
VentureBeat
Oct 5, 2023
StreamingLLM
MIT Researchers Introduce A Novel Lightweight Multi-Scale Attention For On-Device Semantic Segmentation
MarkTechPost
Sep 15, 2023
EfficientViT
AI model speeds up high-resolution computer vision
MIT News, MIT Homepage
Sep 13, 2023
EfficientViT
Smaller is Better: Q8-Chat LLM is an Efficient Generative AI Experience on Intel® Xeon® Processors
Intel News
Aug 7, 2023
SmoothQuant
Learning on the edge
MIT News, MIT Homepage
Oct 4, 2022
On-Device Training
Making quantum circuits more robust
MIT News
Mar 21, 2022
QuantumNAS
Tiny machine learning design alleviates a bottleneck in memory usage on internet-of-things devices
MIT News
Dec 8, 2021
MCUNet-v2
Bringing OFA (Once-for-All) to FPGA
Xilinx News
Jul 13, 2021
OFA