About

I’m a fifth-year Ph.D. student in the Computer Science Department at the University of Maryland, College Park, where I’m fortunate to be advised by Prof. Hal Daumé III.

My research focuses on enhancing the trustworthiness of (visual) language models by tackling challenges in uncertainty, pragmatics, and alignment. I aim to equip these models with self-reasoning and self-improvement capabilities, contributing to the broader vision of human-centered AI. My representative projects include:
(i) Correcting hallucinations and communicating uncertainties [EMNLP 24, EMNLP 23]
(ii) Fostering pragmatic language generation by simulating human behavior [ACL 23]
(iii) Generating faithful free-text explanations [EMNLP 25]
(iv) Improving visual-language alignment for long-context understanding [ACL 25]

Previously, I interned at Microsoft Research, working on personalization, and at Honda Research Institute, focusing on visual language models. I also worked as a staff scientist at BBN Technologies on cross-lingual information retrieval.

I’ll be on the job market in 2026 and am actively exploring full-time opportunities, as well as internships with potential for return (no visa sponsorship needed). Feel free to reach out for an informal chat!

News

  • [Nov 2025] My Ph.D. proposal: Toward Faithful and Pragmatic Language Models for Human-Centered AI Agents
  • [Aug 2025] New paper on A Necessary Step toward Faithfulness: Measuring and Improving Consistency in Free-Text Explanations, accepted by EMNLP 2025. Code released.
  • [Jun 2025] Excited to start a research internship at Microsoft Semantic Machines!
  • [Feb 2025] New paper on Can Hallucination Correction Improve Video-Language Alignment?, accepted by ACL 2025.
  • [Feb 2024] New paper on Successfully Guiding Humans with Imperfect Instructions by Highlighting Potential Errors and Suggesting Alternatives, accepted by EMNLP 2024 (Oral). Project Website
  • [Oct 2023] New paper on Hallucination Detection for Grounded Instruction Generation, accepted by EMNLP 2023. Project Website
  • [Dec 2022] New paper on Define, Evaluate, and Improve Task-Oriented Cognitive Capabilities for Instruction Generation Models, accepted by ACL 2023 and the ICML Theory-of-Mind Workshop 2023 (Outstanding Paper Award). Project Website