ARticulate: Interactive Visual Guidance for Demonstrated Rotational Degrees of Freedom in Mobile AR

CHI 2025

Nhan (Nathan) Tran · Ethan Yang · Abe Davis
Cornell University, NY, USA

Abstract

Mobile Augmented Reality (AR) offers a powerful way to provide spatially-aware guidance for real-world applications. In many cases, these applications involve the configuration of a camera or articulated subject, asking users to navigate several spatial degrees of freedom (DOF) at once. Most guidance for such tasks relies on decomposing the available DOF into subspaces that can be more easily mapped to simple 1D or 2D visualizations. Unfortunately, different factorizations of the same motion often map to very different visual feedback, and finding the factorization that best matches a user's intuition can be difficult. We propose an interactive approach that infers rotational degrees of freedom from short user demonstrations. Users select one or two DOF at a time by demonstrating a small range of motion, which we use to learn a rotational frame that best aligns with user control of the object. We show that deriving visual feedback from this learned rotational frame leads to improved task completion times on 6DOF guidance tasks compared to the default reference frames used in most mixed reality applications.

Paper Presentation

Same Math, Different Perspectives

ARticulate infers axes of rotation from minimal user input:

[Figure] Rotating Object, Fixed Camera: inferring the joint center for a rotating arm. Fixed Object, Moving Camera: inferring the subject center for camera movement.
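To give a flavor of the underlying geometry, the sketch below recovers a rotation axis from two demonstrated poses by taking the axis of their relative rotation. This is a minimal illustration with assumed names and pose conventions, not ARticulate's actual pipeline:

```python
# A minimal sketch, assuming poses are given as 3x3 world-space rotation
# matrices. Illustrates the math only; not the ARticulate implementation.
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_axis(R1: np.ndarray, R2: np.ndarray) -> np.ndarray:
    """Unit axis of the relative rotation that takes pose 1 to pose 2."""
    R_rel = R2 @ R1.T                                  # relative rotation
    rotvec = Rotation.from_matrix(R_rel).as_rotvec()   # axis * angle
    return rotvec / np.linalg.norm(rotvec)

# Two demonstrated poses: 20 deg and 50 deg about the same (unknown) axis.
axis_true = np.array([0.0, 1.0, 0.0])
R1 = Rotation.from_rotvec(np.deg2rad(20) * axis_true).as_matrix()
R2 = Rotation.from_rotvec(np.deg2rad(50) * axis_true).as_matrix()
print(rotation_axis(R1, R2))   # -> approximately [0. 1. 0.]
```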

Interactive Visualizer

How to Use

  1. Set ground truth center of rotation (CoR): Hover over bunny point cloud → click point → click "Set as CoR" (or manually enter X,Y,Z values)
  2. Rotate: Drag X,Y,Z sliders to rotate bunny in real-time
  3. Record poses: Click "Apply & Record Pose" to save current rotation (record 2-3+ poses with varied multi-axis rotations)
    💡 Tip: Single-axis rotations (e.g., changing only Z) yield an underdetermined system with infinitely many solutions along the rotation axis, making it impossible to uniquely determine the (x, y, z) coordinates of the center of rotation. Change multiple axes (e.g., X=30°, Y=45°, Z=20°) to get a unique solution; see the rank check after this list.
  4. Estimate: Click "Estimate Center of Rotation" to see algorithm result (orange marker)
  5. Verify: Click "Show Ground Truth" to compare with exact solution (cyan marker)
Camera controls: Drag to rotate view • Scroll to zoom • Right-click to pan
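The tip above can be checked directly: for any rotation R, the block I − R is rank-deficient along R's axis, so poses that all share one rotation axis leave the center of rotation undetermined along that axis. A minimal numpy check (function name and angle choices are illustrative):

```python
# Rank check for the stacked system M = [I - R_1; ...; I - R_n].
import numpy as np
from scipy.spatial.transform import Rotation

def stacked_M(eulers_deg):
    """Stack the blocks I - R_i for a list of XYZ Euler rotations (degrees)."""
    return np.vstack([np.eye(3) - Rotation.from_euler('xyz', e, degrees=True).as_matrix()
                      for e in eulers_deg])

# Poses that only rotate about Z share a null space along the Z axis,
# so the center of rotation is undetermined along that axis:
print(np.linalg.matrix_rank(stacked_M([[0, 0, 30], [0, 0, 70]])))        # 2
# Poses about different axes pin down all three coordinates:
print(np.linalg.matrix_rank(stacked_M([[30, 45, 20], [-25, 10, 40]])))   # 3
```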


Estimation

🔬 Algorithm: Estimate the center of rotation c using least squares. A fixed center satisfies R_i c + t_i = c for every recorded pose, i.e., (I - R_i) c = t_i:
  1. Extract the rotation matrix R_i and translation t_i from each recorded pose
  2. Stack the linear system Mc = t, where M = [I - R_1; ...; I - R_n] and t = [t_1; ...; t_n]
  3. Solve c = argmin ||Mc - t||²
Record at least two poses with distinct rotation axes to estimate the center of rotation.
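A minimal numpy sketch of this estimator on synthetic poses (names like estimate_cor are illustrative, not from the ARticulate codebase):

```python
# Least-squares center-of-rotation estimate from recorded poses.
import numpy as np
from scipy.spatial.transform import Rotation

def estimate_cor(Rs, ts):
    """Solve c = argmin ||Mc - t||^2 with M = [I - R_1; ...; I - R_n]."""
    M = np.vstack([np.eye(3) - R for R in Rs])
    t = np.concatenate(ts)
    c, *_ = np.linalg.lstsq(M, t, rcond=None)
    return c

# Rotating a rigid body about a fixed center c maps p -> R(p - c) + c,
# i.e., p -> Rp + t with pose translation t = (I - R) c.
c_true = np.array([0.5, -0.2, 1.0])
rng = np.random.default_rng(0)
Rs = [Rotation.from_euler('xyz', rng.uniform(-45, 45, 3), degrees=True).as_matrix()
      for _ in range(3)]
ts = [(np.eye(3) - R) @ c_true for R in Rs]
print(estimate_cor(Rs, ts))   # -> approximately [0.5 -0.2 1.0]
```

Because each block I - R_i has rank at most 2 (every rotation fixes its own axis), the stacked system only becomes full rank once two poses with distinct rotation axes are recorded, which is why the visualizer asks for multiple varied poses.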

3D Visualization


Color Legend

Red: Reference Points
Blue: Transformed Points
Purple: Center of Mass
Green: Ground Truth CoR
Orange: Estimated CoR

Acknowledgements

This work was partially supported by a National Science Foundation Faculty Early Career Development Grant under award #2340448.