Research Scientist, Allen Institute for AI (2023-Now)
Affiliate Assistant Professor, University of Washington (2024-Now)
Research Intern, Meta FAIR Lab. Hosted by Dr. Scott Yih (May-Dec '21)
Research Intern, Google AI. Hosted by Prof. William Cohen ('20) & Sandeep Tata ('19)
Ph.D. in Computer Science, University of Southern California ('18-'22)
B.Sc. in Computer Science (IEEE Honor Class), Shanghai Jiao Tong University ('14-'18)
Senior Area Chair: ACL'25; Area Chair: ICLR'25, ACL'23, EMNLP'23, EMNLP'24; Workshop Organizer: FL4NLP@ACL22, CSRR@ACL22, CSKB@AKBC21, TamingLLM@SIGDIAL+INLG'23
Tutorials: ACL 2023; ACL 2022; WSDM 2023
Best Paper Award, TrustNLP 2021; Best Paper Runner-Up, WWW 2020; Best Thesis @ SJTU.
Resume | Email: bill@yuchenlin.xyz
Bill Yuchen Lin is a Research Scientist at the Allen Institute for AI (Ai2) and an Affiliate Assistant Professor at the University of Washington (UW). His research focuses on aligning large language models (LLMs), training AI agents, reasoning, and multimodal LLMs, with particular emphasis on post-training, evaluation, reward modeling, and synthetic data generation. He also works to deepen the core understanding of the science of language models and to explore their limits, with experience in improving the safety, generalization, robustness, and efficiency of LLMs. Lin has received several honors, including the Best Paper Award Runner-Up at The Web Conference 2020, the Best Paper Award at TrustNLP 2021, and recognition as an AI Rising Star by Baidu Scholar. He serves as a Senior Area Chair for the Association for Computational Linguistics (ACL) and an Area Chair for the International Conference on Learning Representations (ICLR). Lin earned his Ph.D. from the University of Southern California in 2022 and completed his bachelor's degree in the IEEE Honor Class at Shanghai Jiao Tong University (2014-2018), where he received the Best Thesis Award.
24-09-19 Released MagpieLM (4B & 8B), state-of-the-art chat models with a fully open alignment recipe. [X Post]
24-09-11 Released ZeroEval, a leaderboard of LLMs for reasoning. [X Post]
24-08-27 Released the WildVision datasets: WV-Chat, WV-Battle, and WV-Bench. [X Post]
24-08-19 Will serve as an Area Chair for ICLR 2025.
24-06-29 Will serve as a Senior Area Chair for ACL 2025.
24-05-08 Three ACL 2024 Main Conference papers: Agent Lumos, ETO, and SafeDecoding!
24-05-01 Will serve as an Area Chair for EMNLP 2024.
24-03-08 Introducing AI2 WildBench! A dynamic LLM benchmark for challenging tasks from real users. [Leaderboard] | [Tweet]
24-03-06 2 new preprints: ETO (Continual DPO for Agent Training) and OpenCI (open code interpreter).
24-02-16 2 new preprints: L3GO (with AI2 intern Yutaro Yamada from Yale); SafeDecoding (led by Zhangchen Xu at UW).
24-02-09 Check out our Vision Arena demo on HuggingFace! You can test many Vision LMs side by side here!
24-01-30 Invited talk at UT Austin (Host: Prof. Jessy Li at LIN 393).
24-01-16 Accepted to ICLR'24: The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning.
23-12-01 We released PairRM-0.4B, which is based on LLM-Blender. It achieves strong performance on the AlpacaEval Leaderboard: [picture] [tweet 1] [tweet 2]. Kudos to Dongfu Jiang's great work!
23-11-15 New preprint: Lumos Agent (with AI2 intern Da Yin from UCLA).
23-11-01 New preprint: Personalized RLHF (with AI2 intern Joel Jang from UW).
23-10-15 New preprints: TIGER-Score (reference-free NLG evaluation) and Suspicion-Agent (playing imperfect-information games).
23-09-21 Our SwiftSage and FnF papers were accepted to NeurIPS 2023 as spotlights!
23-07-29 Check out our new work (with Chengsong and Qian): LoraHub for efficient cross-task generalization.
23-07-09 Co-presented a tutorial at ACL 2023 on Complex Reasoning in Natural Language.
23-06-18 Will serve as an Area Chair at EMNLP 2023.
23-01-01 Will serve as an Area Chair for ACL 2023!