Intelligence that lives in the body.
Our research merges the software intelligence of the Lab with the hardware realized by our Engineering teams. This unified hub spans foundation models for robotics, imitation learning from human demonstration, real-time motor control, and sim-to-real transfer.
By doing, failing, and refining in the real world, our robots acquire skills that generalize across environments, objects, and tasks.
Combining high-frequency whole-body control with learned policies to enable VISAGE to handle fragile objects, use human tools, and adapt to dynamically shifting loads without dropping its payload. Teleoperation data is currently being collected at 10x speed.
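As a sketch of how such a stack fits together, assuming a slow learned policy layered over a fast PD-style whole-body tracker (the rates, class names, and toy arm below are illustrative assumptions, not the actual VISAGE stack):

```python
import numpy as np

POLICY_HZ = 30     # assumed policy inference rate
CONTROL_HZ = 1000  # assumed whole-body control rate

class LearnedPolicy:
    """Stand-in for a manipulation policy trained on teleoperation data."""
    def act(self, q: np.ndarray) -> np.ndarray:
        # A real policy would run inference on camera + proprioception;
        # here we just emit a fixed grasp-pose setpoint.
        return np.full_like(q, 0.5)

class SimRobot:
    """Toy 7-DoF arm with unit inertia, integrated with symplectic Euler."""
    def __init__(self):
        self.q = np.zeros(7)   # joint positions
        self.dq = np.zeros(7)  # joint velocities
    def step(self, tau, dt):
        self.dq += tau * dt
        self.q += self.dq * dt

def whole_body_torques(q, dq, setpoint, kp=50.0, kd=5.0):
    # PD tracking toward the policy setpoint; a real whole-body controller
    # would additionally solve for balance, contact forces, and joint limits.
    return kp * (setpoint - q) - kd * dq

robot, policy = SimRobot(), LearnedPolicy()
setpoint = robot.q.copy()
for tick in range(2 * CONTROL_HZ):                 # two seconds of control
    if tick % (CONTROL_HZ // POLICY_HZ) == 0:      # slow loop: policy update
        setpoint = policy.act(robot.q)
    tau = whole_body_torques(robot.q, robot.dq, setpoint)  # fast 1 kHz loop
    robot.step(tau, dt=1.0 / CONTROL_HZ)
print(f"max joint tracking error: {np.abs(setpoint - robot.q).max():.4f}")
```

The key property is the rate split: the policy can afford tens of milliseconds of inference, while the inner loop keeps the arm stable between setpoints.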
Pushing the limits of Model Predictive Control (MPC) and Reinforcement Learning (RL) to navigate uneven terrain, recover from strong shoves, and transition seamlessly between walking, crouching, and load-carrying gaits.
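One common way to combine the two is an RL residual on top of an MPC baseline: MPC plans over a short horizon with a simple model, and a learned correction absorbs what that model misses, such as a shove or soft ground. A minimal sketch under assumptions of our own, with a 1-D double integrator standing in for the robot's center of mass, random-shooting MPC, and a stubbed residual:

```python
import numpy as np

DT, HORIZON, SAMPLES, TARGET = 0.02, 25, 256, 1.0
rng = np.random.default_rng(0)

def mpc_action(x, v):
    """Random-shooting MPC on a 1-D double integrator: sample control
    sequences, roll the model forward, execute the best first action."""
    U = rng.normal(0.0, 2.0, size=(SAMPLES, HORIZON))
    xs, vs = np.full(SAMPLES, x), np.full(SAMPLES, v)
    cost = np.zeros(SAMPLES)
    for k in range(HORIZON):
        vs = vs + U[:, k] * DT
        xs = xs + vs * DT
        cost += (xs - TARGET) ** 2 + 0.01 * U[:, k] ** 2
    return U[np.argmin(cost), 0]

def rl_residual(x, v):
    """Stub for a trained RL correction; a real policy would output
    offsets that absorb unmodeled dynamics the MPC model misses."""
    return 0.0

x, v = 0.0, 0.0
for t in range(200):
    if t == 100:
        v -= 1.5  # simulated shove: sudden velocity disturbance
    u = mpc_action(x, v) + rl_residual(x, v)
    v += u * DT
    x += v * DT
print(f"position after disturbance and recovery: {x:.3f} (target {TARGET})")
```

The division of labor is the point: MPC supplies a physically grounded plan, and the residual is only responsible for the gap between the simple model and the real robot.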
Bridging natural language instructions and physical robot behavior through Vision-Language-Action (VLA) models. This enables end users to command VISAGE with open-ended speech like "Please clean up the coffee spill" without writing a single line of code.
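At the interface level, a VLA model maps the current camera frame plus a language instruction to a short chunk of low-level actions, re-planned in a closed loop. A minimal sketch of that loop; VLAModel, its action format, and the toy wiring below are hypothetical stand-ins, not the actual VISAGE system:

```python
import numpy as np

class VLAModel:
    """Hypothetical stand-in for a Vision-Language-Action model.
    A real VLA would be a large transformer conditioned on camera
    frames and tokenized text, decoding discretized robot actions."""
    ACTION_DIM = 7  # e.g. 6-DoF end-effector delta + gripper

    def predict_actions(self, image: np.ndarray, instruction: str,
                        chunk_size: int = 8) -> np.ndarray:
        # Placeholder inference: returns a chunk of small random deltas.
        rng = np.random.default_rng(abs(hash(instruction)) % 2**32)
        return rng.normal(0.0, 0.01, size=(chunk_size, self.ACTION_DIM))

def run_instruction(model, camera, robot, instruction, max_steps=200):
    """Closed loop: re-observe, re-plan a short action chunk, execute."""
    for _ in range(0, max_steps, 8):
        frame = camera()                   # latest RGB observation
        chunk = model.predict_actions(frame, instruction)
        for action in chunk:               # execute open-loop within a chunk
            robot(action)

# Toy wiring so the sketch runs end to end.
camera = lambda: np.zeros((224, 224, 3), dtype=np.uint8)
log = []
robot = log.append
run_instruction(VLAModel(), camera, robot, "Please clean up the coffee spill")
print(f"executed {len(log)} low-level actions")
```

The closed loop matters: re-querying the model every chunk lets the robot react if the spill, or the mug, moves mid-task.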