Application / Inference
OmniTrack forms a two-stage pipeline for both offline and online use: the physical motion generation stage filters one or more motion clips via simulator rollouts to produce physically feasible, dynamics-consistent trajectories, and the general motion tracking stage delivers stable long-horizon tracking across diverse behaviors.
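The first stage's filtering step can be sketched as follows. This is an illustrative toy, not OmniTrack's implementation: `rollout_error`, `MAX_TRACKING_ERROR`, and the stand-in error metric are all hypothetical names chosen for the example; the real system would score clips by rolling them out in a physics simulator.

```python
# Hypothetical sketch: filter candidate motion clips by a simulated
# rollout, keeping only those the rollout reproduces within an error
# budget. All names here are illustrative, not from OmniTrack.
from typing import Callable, List, Sequence

MAX_TRACKING_ERROR = 0.05  # illustrative per-frame error budget

def filter_clips(
    clips: Sequence[Sequence[float]],
    rollout_error: Callable[[Sequence[float]], float],
) -> List[Sequence[float]]:
    """Keep clips whose simulated rollout stays within the error budget."""
    return [c for c in clips if rollout_error(c) <= MAX_TRACKING_ERROR]

# Stand-in for a physics rollout: here "error" is just the clip's largest
# frame-to-frame jump, so dynamically implausible jumps are rejected.
def toy_rollout_error(clip: Sequence[float]) -> float:
    return max((abs(b - a) for a, b in zip(clip, clip[1:])), default=0.0)

smooth = [0.0, 0.01, 0.02, 0.03]   # feasible: small, continuous steps
jumpy = [0.0, 0.5, 0.0, 0.5]       # infeasible: large discontinuities
feasible = filter_clips([smooth, jumpy], toy_rollout_error)
```

The design point is that filtering happens before tracking: only trajectories the simulator can actually reproduce reach the second stage.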
For online inference, real-time commands from mocap, VR headsets, or other sources are first refined in simulation (e.g., IsaacLab or MuJoCo) and then fed to the tracking policy for joint-level real-robot control, enabling robust, highly dynamic teleoperation under continuously varying user input.
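The online loop described above can be sketched as a per-frame refine-then-track cycle. This is a minimal toy under stated assumptions: `refine_in_sim` and `tracking_policy` are hypothetical stubs (a clamp standing in for simulator refinement, a proportional step standing in for the learned policy), not the actual OmniTrack components.

```python
# Hypothetical sketch of the online path: each incoming command frame is
# refined in simulation before the tracking policy converts it into
# joint-level targets. Both functions are illustrative stand-ins.
from typing import List

def refine_in_sim(command: List[float]) -> List[float]:
    # Stand-in for simulator refinement: project the raw command onto a
    # feasible reference (here, simply clamp to normalized joint limits).
    return [max(-1.0, min(1.0, q)) for q in command]

def tracking_policy(reference: List[float], state: List[float]) -> List[float]:
    # Stand-in for the learned tracking policy: a proportional step of
    # the current joint state toward the refined reference.
    gain = 0.5
    return [s + gain * (r - s) for r, s in zip(reference, state)]

state = [0.0, 0.0]                               # current joint positions
for raw_command in ([2.0, -0.3], [0.4, 1.5]):    # e.g. streamed VR poses
    reference = refine_in_sim(raw_command)       # refine in simulation
    state = tracking_policy(reference, state)    # joint-level control step
```

Keeping refinement in the loop means the policy only ever sees physically feasible references, even when the raw user input is noisy or out of range.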