CVPR 2024
OVER-NAV: Elevating Iterative Vision-and-Language Navigation with Open-Vocabulary Detection and StructurEd Representation
Ganlong Zhao, Guanbin Li, Weikai Chen, Yizhou Yu

Abstract


Recent advances in Iterative Vision-and-Language Navigation (IVLN) introduce a more meaningful and practical paradigm of VLN by maintaining the agent’s memory across tours of scenes. Although long-term memory aligns better with the persistent nature of the VLN task, it poses greater challenges in utilizing the highly unstructured navigation memory under extremely sparse supervision. To this end, we propose OVER-NAV, which aims to go over and beyond the current art of IVLN techniques. In particular, we incorporate LLMs and open-vocabulary detectors to distill key information and establish correspondence between multi-modal signals. This mechanism introduces reliable cross-modal supervision and enables on-the-fly generalization to unseen scenes without extra annotation or re-training. To fully exploit the interpreted navigation data, we further introduce a structured representation, dubbed Omnigraph, to effectively integrate multi-modal information along the tour. Accompanied by a novel omnigraph fusion mechanism, OVER-NAV extracts the most relevant knowledge from the omnigraph for more accurate navigation actions. In addition, OVER-NAV seamlessly supports both discrete and continuous environments under a unified framework. We demonstrate the superiority of OVER-NAV in extensive experiments.
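To make the keyword-grounding idea concrete, the sketch below scores a candidate viewpoint by matching instruction keywords (e.g. distilled by an LLM) against open-vocabulary detection results. The function name, keyword set, and detector output are illustrative assumptions, not the paper's actual interface.

```python
def score_viewpoint(instruction_keywords, detections):
    """Score a candidate viewpoint by how many distilled instruction
    keywords the open-vocabulary detector found in its panorama,
    weighted by detection confidence. (Hypothetical scoring rule.)"""
    return sum(conf for label, conf in detections
               if label in instruction_keywords)

# Keywords an LLM might distill from "walk past the sofa to the lamp".
keywords = {"sofa", "staircase", "lamp"}
# (label, confidence) pairs a detector might return for one viewpoint.
detections = [("sofa", 0.9), ("table", 0.7), ("lamp", 0.6)]
print(score_viewpoint(keywords, detections))  # sofa + lamp match
```

A confidence-weighted sum is only one plausible fusion choice; the paper's actual mechanism operates over the full omnigraph rather than a single viewpoint.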


Framework



Experiment



Conclusion


We propose OVER-NAV, an open-vocabulary-based method for Iterative Vision-and-Language Navigation. OVER-NAV incorporates LLMs and an open-vocabulary detector to construct an omnigraph, whose viewpoints and keyword-annotated connections describe the distribution of key objects that are important for the agent’s navigation. Extensive experiments in both discrete and continuous environments demonstrate that the omnigraph is a superior and more general structured memory for describing the navigation scene, enabling the agent to exploit the navigation history of previous episodes in IVLN for better performance.
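The omnigraph described above (viewpoints plus keyword-annotated connections) can be sketched as a minimal data structure. All names and methods below are hypothetical stand-ins for the paper's actual implementation.

```python
from collections import defaultdict


class Omnigraph:
    """Minimal sketch of an omnigraph-style memory (hypothetical API).

    Viewpoints are nodes; each connection carries the set of keywords
    (key objects distilled from instructions and open-vocabulary
    detections) observed when traversing between two viewpoints.
    """

    def __init__(self):
        self.viewpoints = set()
        # (src, dst) -> set of keywords observed on that connection.
        self.edge_keywords = defaultdict(set)

    def add_observation(self, src, dst, keywords):
        # Register both endpoints and accumulate keywords on the edge.
        self.viewpoints.update((src, dst))
        self.edge_keywords[(src, dst)].update(keywords)

    def relevant_keywords(self, viewpoint):
        # Gather keywords on all edges incident to a viewpoint --
        # a crude stand-in for the paper's omnigraph fusion step.
        found = set()
        for (s, d), kws in self.edge_keywords.items():
            if viewpoint in (s, d):
                found |= kws
        return found


g = Omnigraph()
g.add_observation("vp1", "vp2", {"sofa", "lamp"})
g.add_observation("vp2", "vp3", {"kitchen counter"})
print(sorted(g.relevant_keywords("vp2")))
```

In the paper, fusion selects the knowledge most relevant to the current instruction rather than taking a plain union, but the graph-of-keywords memory layout is the core idea this sketch illustrates.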