To assist human navigation with AI-generated instructions, we develop the HEAR model to highlight hallucination spans and suggest possible corrections. The green box marks the ground-truth destination, and the orange highlight shows the hallucination span with correction suggestions in a dropdown menu.

We address the challenge of leveraging imperfect language models to guide human decision-making in the context of a grounded navigation task. We show that an imperfect instruction generation model can be complemented with an effective communication mechanism to become more successful at guiding humans. The communication mechanism we build comprises models that detect potential hallucinations in instructions and suggest practical alternatives, and an intuitive interface that presents this information to users. We show that this approach reduces human navigation error by up to 29% with no additional cognitive burden. This result underscores the potential of integrating diverse communication channels into AI systems to compensate for their imperfections and enhance their utility for humans.
|
Our hallucination detection and remedy model (HEAR). For hallucination detection, it takes a language instruction and a visual route as input and determines whether a highlighted span in the instruction is a hallucination with respect to the visual route. For hallucination remedy, it classifies a highlighted span as intrinsic or extrinsic and then ranks correction suggestions. Both classifiers are trained with a contrastive learning objective.
Human experiments show that the highlights and suggestions produced by our model improve human navigation performance.
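To make the detect-classify-rank pipeline above concrete, here is a minimal sketch, not the actual HEAR implementation. The names (`SpanScorer`, `hear_inference`, `contrastive_loss`), feature dimensions, thresholds, and the margin-based loss are illustrative assumptions; the real system encodes full instructions and panoramic visual routes, and its contrastive objective and classifier heads may differ.

```python
import torch
import torch.nn as nn


class SpanScorer(nn.Module):
    """Scores how compatible a text span is with visual route features (hypothetical head)."""

    def __init__(self, text_dim=256, visual_dim=256, hidden_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + visual_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, span_feats, route_feats):
        # Concatenate span and route representations and output a single logit.
        return self.mlp(torch.cat([span_feats, route_feats], dim=-1)).squeeze(-1)


def contrastive_loss(pos_logits, neg_logits, margin=1.0):
    # One simple instantiation of a contrastive objective (an assumption here):
    # faithful (positive) spans should outscore hallucinated (negative) spans by a margin.
    return torch.clamp(margin - pos_logits + neg_logits, min=0.0).mean()


def hear_inference(span_feats, route_feats, candidate_feats, detector, remedy_head):
    """Detect a hallucinated span, type it, and rank correction candidates.

    span_feats:      (d,) features of the highlighted instruction span
    route_feats:     (d,) features of the visual route
    candidate_feats: (k, d) features of k candidate replacement phrases
    """
    # 1) Hallucination detection: is the span inconsistent with the route?
    if torch.sigmoid(detector(span_feats, route_feats)) <= 0.5:
        return {"hallucination": False}

    # 2) Remedy type: intrinsic (wrong word for something in the scene)
    #    vs. extrinsic (mentions something absent from the scene).
    intrinsic = torch.sigmoid(remedy_head(span_feats, route_feats)) > 0.5

    # 3) Rank candidate corrections by compatibility with the visual route.
    scores = detector(candidate_feats, route_feats.expand(candidate_feats.shape))
    order = torch.argsort(scores, descending=True)

    return {
        "hallucination": True,
        "type": "intrinsic" if bool(intrinsic) else "extrinsic",
        "ranked_candidates": order.tolist(),
    }


if __name__ == "__main__":
    d, k = 256, 5
    detector, remedy_head = SpanScorer(d, d), SpanScorer(d, d)
    out = hear_inference(torch.randn(d), torch.randn(d), torch.randn(k, d),
                         detector, remedy_head)
    print(out)
```

In this sketch the detection head doubles as the candidate ranker; a dedicated ranking head trained on correction suggestions would be an equally plausible design.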
|
Acknowledgements