About

I’m a 5th-year Ph.D. student in the Computer Science Department at the University of Maryland, College Park, where I’m fortunate to be advised by Prof. Hal Daumé III.

My research focuses on improving the communication efficacy of (visual) language models by addressing challenges in uncertainty, explainability, pragmatics, and alignment. I aim to equip these models with self-improvement capabilities that minimize reliance on data annotation, advancing human-AI collaboration. My representative work includes:
(i) Detecting hallucinations and communicating rich uncertainties to improve human task performance [EMNLP 24, EMNLP 23]
(ii) Fostering pragmatic communication through human behavior simulation using RL agents [ACL 23]
(iii) Generating faithful free-text explanations [EMNLP 25]
(iv) Improving visual-language alignment for long-context understanding [ACL 25]

Previously, I interned at Microsoft Research, working on personalization, and at Honda Research Institute, focusing on visual language models. I also worked as a staff scientist at BBN Technologies on cross-lingual information retrieval.

I’ll be on the job market in 2026 and am actively exploring full-time opportunities. Feel free to reach out for an informal chat!

News

  • [Nov 2025] My Ph.D. proposal: Methods for Improving the Communication Efficacy of Language Models: Faithfulness and Pragmatics
  • [Aug 2025] New paper on A Necessary Step toward Faithfulness: Measuring and Improving Consistency in Free-Text Explanations, accepted by EMNLP 2025. Code released.
  • [Jun 2025] Excited to start research internship at Microsoft Semantic Machines!
  • [Feb 2025] New paper on Can Hallucination Correction Improve Video-Language Alignment?, accepted by ACL 2025.
  • [Feb 2024] New paper on Successfully Guiding Humans with Imperfect Instructions by Highlighting Potential Errors and Suggesting Alternatives, accepted by EMNLP 2024 (Oral). Project Website
  • [Oct 2023] New paper on Hallucination Detection for Grounded Instruction Generation, accepted by EMNLP 2023. Project Website
  • [Dec 2022] New paper on Define, Evaluate, and Improve Task-Oriented Cognitive Capabilities for Instruction Generation Models, accepted by ACL 2023 and the ICML Theory-of-Mind Workshop 2023 (Outstanding Paper Award). Project Website