CVPR 2025
DSPNet: Dual-vision Scene Perception for Robust 3D Question Answering
Jingzhou Luo, Yang Liu, Weixing Chen, Zhen Li, Yaowei Wang, Guanbin Li, Liang Lin

github.com/LZ-CH/DSPNet

Abstract


3D Question Answering (3D QA) requires a model to comprehensively understand the situated 3D scene described by the text, then reason about its surrounding environment and answer a question under that situation. However, existing methods usually rely on global scene perception from pure 3D point clouds and overlook the rich local texture details available in multi-view images. Moreover, due to inherent noise in camera poses and complex occlusions, aligning 3D point clouds with multi-view images suffers from significant feature degradation and reduced robustness. In this paper, we propose a Dual-vision Scene Perception Network (DSPNet) that comprehensively integrates multi-view and point cloud features to improve robustness in 3D QA. Our Text-guided Multi-view Fusion (TGMF) module prioritizes image views that closely match the semantic content of the text. To adaptively fuse back-projected multi-view images with point cloud features, we design the Adaptive Dual-vision Perception (ADVP) module, enhancing 3D scene comprehension. Additionally, our Multimodal Context-guided Reasoning (MCGR) module facilitates robust reasoning by integrating contextual information across visual and linguistic modalities. Experimental results on the SQA3D and ScanQA datasets demonstrate the superiority of our DSPNet.
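The text-guided view weighting and dual-vision fusion described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; all function names, feature shapes, and the sigmoid gate are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def tgmf(view_feats, text_feat):
    """Text-guided Multi-view Fusion (sketch): weight each image view by
    its scaled dot-product similarity to the question embedding.
    view_feats: (V, D) per-view features; text_feat: (D,) text feature."""
    scores = view_feats @ text_feat / np.sqrt(view_feats.shape[1])
    weights = softmax(scores)          # (V,) one weight per view
    return weights @ view_feats        # (D,) text-guided fused image feature

def advp(point_feat, image_feat):
    """Adaptive Dual-vision Perception (sketch): a simple element-wise gate
    standing in for a learned gate that adaptively mixes point-cloud and
    back-projected image features."""
    gate = 1.0 / (1.0 + np.exp(-(point_feat - image_feat)))  # sigmoid gate
    return gate * point_feat + (1.0 - gate) * image_feat     # (D,) fused

# Toy usage with random features (4 views, 8-dim features).
rng = np.random.default_rng(0)
views = rng.standard_normal((4, 8))
text = rng.standard_normal(8)
fused_image = tgmf(views, text)
fused_scene = advp(rng.standard_normal(8), fused_image)
print(fused_scene.shape)  # (8,)
```

In an actual network the view weights and the fusion gate would be produced by learned attention and gating layers; the sketch only shows the data flow: text conditions the view aggregation, and the gate blends the two visual sources per feature dimension.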


Framework



Experiment



Conclusion


In this paper, we propose DSPNet, a dual-vision network for 3D QA. DSPNet integrates multi-view image features via a Text-guided Multi-view Fusion module, then adaptively fuses image and point cloud features into a unified representation using an Adaptive Dual-vision Perception module. Finally, a Multimodal Context-guided Reasoning module is introduced for comprehensive 3D scene reasoning. Experimental results demonstrate that DSPNet outperforms existing methods, producing predicted answers that align more closely with reference answers in semantic structure. A limitation of DSPNet is its reliance on pre-scanned point clouds and pre-captured multi-view images, which may limit its applicability in dynamic environments.