Dual Focus-Attention Transformer for Robust Point Cloud Registration
Kexue Fu*, Mingzhi Yuan*, Changwei Wang, Weiguang Pang, Jing Chi, Manning Wang☨, Longxiang Gao☨
The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025
Abstract
Recently, coarse-to-fine methods for point cloud registration have achieved great success, but few works deeply explore the impact of feature interaction at both the coarse and fine scales. By visualizing attention scores and correspondences, we find that existing methods fail to achieve effective feature aggregation at either scale during feature interaction. To tackle this issue, we propose a Dual Focus-Attention Transformer framework, which restricts feature interaction to points relevant to the current point, avoiding interactions with irrelevant points. At the coarse scale, we design a superpoint focus-attention transformer guided by sparse keypoints selected from the neighborhood of each superpoint. At the fine scale, we perform feature interaction only between point sets that belong to the same superpoint. Experiments show that our method achieves state-of-the-art performance on three standard benchmarks.
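Below is a minimal sketch (not the released implementation) of the fine-scale idea described in the abstract: attention is masked so that each point only interacts with points assigned to the same superpoint. The function name `focus_attention` and the argument `group_ids` are illustrative assumptions, not identifiers from the paper.

```python
# Illustrative sketch of superpoint-restricted attention; assumes points have
# already been assigned to superpoints (group_ids). Not the authors' code.
import torch

def focus_attention(q, k, v, group_ids):
    """Scaled dot-product attention where point i attends only to points j
    with group_ids[i] == group_ids[j] (i.e., within the same superpoint).

    q, k, v:   (N, C) per-point features
    group_ids: (N,)   superpoint index of each point
    """
    scores = q @ k.transpose(-1, -2) / (q.shape[-1] ** 0.5)        # (N, N) similarities
    same_group = group_ids.unsqueeze(0) == group_ids.unsqueeze(1)  # (N, N) boolean mask
    scores = scores.masked_fill(~same_group, float("-inf"))        # block irrelevant points
    attn = torch.softmax(scores, dim=-1)
    return attn @ v                                                # aggregated features

# Toy usage: 8 points, 16-dim features, 3 superpoint groups
feats = torch.randn(8, 16)
groups = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2])
out = focus_attention(feats, feats, feats, groups)
```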