Ji Lin graduated from the MIT HAN Lab in Dec. 2023 and joined OpenAI as a research scientist. His research focuses on efficient deep learning computing, systems for ML, and, more recently, accelerating large language models (LLMs). Ji is a pioneer in the field of TinyML. His research has received over 10,000 citations on Google Scholar and over 8,000 stars on GitHub. His work on LLM quantization (AWQ) received the Best Paper Award at MLSys'24. AWQ has been widely adopted by NVIDIA, Intel, Microsoft, AMD, HuggingFace, and Berkeley to accelerate LLM inference, and AWQ-quantized LLMs have been downloaded more than 6 million times on HuggingFace. Ji was an NVIDIA Graduate Fellowship finalist in 2020 and a Qualcomm Innovation Fellowship recipient in 2022. His work has been covered by MIT Technology Review, MIT News (twice on the MIT homepage and four times on MIT News), WIRED, Engadget, VentureBeat, and others.