Learning the RoPEs: Better 2D and 3D Position Encodings with STRING
Lay Summary
This paper introduces STRING, a new and improved method for AI models to encode the position of items, especially in 2D images and 3D scenes. Current AI models (Transformers) grasp content but have no built-in sense of order or location. STRING builds on a popular method called RoPE but is more general and better suited to multi-dimensional data. It retains RoPE's key benefits, encoding each item's position independently ("separability") and depending only on relative distances ("translational invariance"), while being more powerful. The paper proves that, under those conditions, STRING is the most general approach of its kind. Crucially, it delivers significant performance gains in practical applications such as object detection and robotics control, where efficiently representing 2D and 3D information is vital. In short, STRING helps AI "see" and understand spatial arrangements more effectively.
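The "translational invariance" property mentioned above can be illustrated with a minimal sketch of RoPE's core mechanism in one dimension (this is an illustrative toy example, not the paper's STRING construction): queries and keys are rotated by an angle proportional to their position, so the attention score between two tokens depends only on their relative offset, not their absolute positions.

```python
import numpy as np

def rotate(vec, pos, theta=0.1):
    # Rotate a 2D vector by angle pos * theta: the core of RoPE
    # in one dimension (real models apply this per feature pair).
    angle = pos * theta
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s], [s, c]]) @ vec

q = np.array([1.0, 2.0])   # toy query vector
k = np.array([0.5, -1.0])  # toy key vector

# Dot products of rotated vectors depend only on the relative offset:
score_a = rotate(q, 5) @ rotate(k, 3)    # positions 5 and 3, offset 2
score_b = rotate(q, 12) @ rotate(k, 10)  # positions 12 and 10, offset 2

assert np.allclose(score_a, score_b)  # same offset -> same score
```

This follows because the rotation matrices satisfy R(a)ᵀ R(b) = R(b − a), so only the position difference survives in the dot product. STRING generalizes this idea to richer transformations suited to 2D and 3D coordinates.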