(Bill) Yuchen Lin

I evaluate Large Language Models (LLMs),
build AI agents, and study the science of LLMs.
  Postdoc Researcher, Allen Institute for AI, Mosaic. Hosted by Prof. Yejin Choi (University of Washington) (23' Feb - Now)
Research Intern, Meta FAIR Lab. Hosted by Dr. Scott Yih (21' May - Dec)
Research Intern, Google AI. Hosted by Dr. William Cohen (20') & Sandeep Tata (19')
πŸŽ“ PhD in Computer Science,
University of Southern California (18'-22')
πŸŽ“ BSc in Computer Science (IEEE Honor Class),
Shanghai Jiao Tong University (14'-18')
πŸͺ‘ Area Chair: ACL'23, EMNLP'23; Workshop Organizer: FL4NLP@ACL22, CSRR@ACL22, CSKB@AKBC21, TamingLLM@SIGDIAL+INLG'23
πŸ—£οΈ Tutorials: ACL 2023; ACL 2022; WSDM 2023;
πŸ₯‡ Best Paper Award, TrustNLP 2021; Best Paper Runner-Up, WWW 2020; Best Thesis @ SJTU.
Upcoming trip: ICLR 2024 @ Vienna, Austria.
24-03-08 Introducing AI2 🦁 WildBench! A dynamic LLM benchmark for challenging tasks from real users. [Leaderboard] | [Tweet]
24-03-06 2 new preprints: πŸ” ETO (Continual DPO for Agent Training) and πŸ’» OpenCI (open code interpreter).
24-02-16 2 new preprints: 🧩 L3GO (with AI2 intern Yutaro Yamada from Yale); πŸ›‘οΈ SafeDecoding (led by Zhangchen Xu at UW).
24-02-09 Check out our Vision Arena demo on HuggingFace! You can test many Vision LMs side by side here!
24-01-30 Invited talk at UT Austin (Host: Prof. Jessy Li at LIN 393).
24-01-16 Accepted by ICLR'24: πŸͺ„ The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning.
23-12-01 We released PairRM-0.4B, based on LLM-Blender. It achieves strong performance on the AlpacaEval Leaderboard: [picture] [tweet 1] [tweet 2]. Kudos to Dongfu Jiang's great work!
23-11-15 New preprint: πŸͺ„ Lumos Agent (with AI2 intern Da Yin from UCLA)
23-11-01 New preprint: 🍲 Personalized RLHF (with AI2 intern Joel Jang from UW).
23-10-15 New preprints: 🐯 TIGER-Score (reference-free NLG evaluation) and πŸƒ Suspicion-Agent (playing imperfect-information games).
23-09-21 Our SwiftSage and FnF papers were accepted to NeurIPS 2023 as spotlights! 🌟 🌟
23-07-29 Check out our new work (with Chengsong and Qian): LoraHub for efficient cross-task generalization.
23-07-09 Co-presented a tutorial at ACL 2023 on Complex Reasoning in Natural Language.
23-06-18 Will serve as an Area Chair at EMNLP 2023.
23-01-01 Will serve as an Area Chair for ACL 2023!

Yuchen Lin is a Postdoc Researcher at the Allen Institute for AI (AI2), working with Prof. Yejin Choi (University of Washington) on the Mosaic team at AI2. Yuchen's primary interests lie in studying the science of large language models (LLMs), developing AI agents for complex interactive tasks, and evaluating the reasoning and alignment abilities of LLMs. His research aims to teach machines how to think, plan, and act like humans. Moreover, Yuchen's work focuses on enhancing the robustness, safety, and generalization of LLMs through retrieval augmentation, continual learning, ensemble learning, and related methods.

Yuchen received the Best Paper Award Runner-Up at The Web Conference 2020 and the Best Paper Award at TrustNLP 2021, and was selected as an AI Rising Star by Baidu Scholar. He has given several tutorials at ACL and WSDM, and served as an Area Chair for ACL 2023 and EMNLP 2023. He received his PhD from the University of Southern California in 2022, advised by Prof. Xiang Ren. Previously, he received his bachelor's degree from the IEEE Honor Class at Shanghai Jiao Tong University (2014-2018), advised by Prof. Kenny Zhu, and won the Best Thesis Award. He was a research intern at Facebook AI Research (FAIR) (2021 with Scott Yih), Google AI (2020 with William Cohen, 2019 with Sandeep Tata), and Microsoft Research Asia (2017-2018).

🀝 I'm very open to collaboration! Please feel free to email me. :D