VISAGE Internal Actuators

AI, Hardware, & Autonomy Hub

Intelligence that lives in the body.

VISAGE Foundation Models

We train robots the way humans learn.


Our research merges the software intelligence of the Lab with the hardware realization of our Engineering teams. This unified hub spans foundation models for robotics, imitation learning from human demonstration, real-time motor control, and sim-to-real transfer.

By doing, failing, and refining in the real world, our robots acquire skills that generalize across environments, objects, and tasks.
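One common route to the sim-to-real transfer mentioned above is domain randomization: skills are practiced across many perturbed simulations so they survive the jump to real hardware. The sketch below is purely illustrative; the parameter names, ranges, and training hook are assumptions rather than VISAGE internals.

```python
# Minimal sketch of domain randomization for sim-to-real transfer.
# All names and ranges here are illustrative assumptions.
import random

def sample_sim_params():
    """Draw randomized physics for one simulated training episode (hypothetical ranges)."""
    return {
        "ground_friction": random.uniform(0.4, 1.2),    # slippery to grippy floors
        "payload_mass_kg": random.uniform(0.0, 5.0),     # empty hands to heavy loads
        "motor_latency_ms": random.uniform(0.0, 20.0),   # actuation delay
        "sensor_noise_std": random.uniform(0.0, 0.02),   # proprioception noise
    }

def run_training_episode(sim_params):
    """Placeholder: roll out and update the policy in a simulator
    configured with sim_params; the real training loop lives elsewhere."""
    pass

for episode in range(10_000):
    run_training_episode(sample_sim_params())  # fresh physics every episode
```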

Project Alpha: Dexterous Manipulation at Scale

Combining high-frequency whole-body control with learned policies to enable VISAGE to handle fragile objects, use human tools, and dynamically adapt to shifting payload weights without dropping them. Teleoperation data is currently being collected at 10x speed.

Imitation Learning · Tactile Feedback
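As an illustration of the imitation-learning component, the sketch below trains a behavior-cloning policy that maps proprioceptive and tactile observations from teleoperated demonstrations to motor commands. The dataset, feature dimensions, and network layout are illustrative assumptions, not the production VISAGE stack.

```python
# Behavior-cloning sketch: learn actions from teleoperated demonstrations.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

PROPRIO_DIM, TACTILE_DIM, ACTION_DIM = 32, 16, 12  # hypothetical dimensions

# Synthetic stand-in for logged (observation, action) pairs from teleoperation.
obs = torch.randn(1024, PROPRIO_DIM + TACTILE_DIM)
actions = torch.randn(1024, ACTION_DIM)
loader = DataLoader(TensorDataset(obs, actions), batch_size=64, shuffle=True)

# Policy: fuse proprioceptive and tactile features, regress motor commands.
policy = nn.Sequential(
    nn.Linear(PROPRIO_DIM + TACTILE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(10):
    for batch_obs, batch_act in loader:
        pred = policy(batch_obs)
        loss = nn.functional.mse_loss(pred, batch_act)  # imitate the demonstrator
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```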

Project Beta: Agile Bipedal Locomotion

Pushing the limits of Model Predictive Control (MPC) and Reinforcement Learning (RL) to navigate uneven terrain, recover from strong shoves, and transition seamlessly between walking, crouching, and load-carrying gaits.

Reinforcement Learning · Model Predictive Control · Sim-to-Real
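For a flavor of the receding-horizon control side, the sketch below implements a toy random-shooting MPC loop: sample candidate action sequences, score them against a dynamics model, and execute the first action of the cheapest sequence before replanning. The dynamics, cost, and horizon are placeholders; the actual controller pairs MPC with learned RL policies as described above.

```python
# Toy random-shooting MPC loop; dynamics, cost, and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 6, 3, 10, 256
A = np.eye(STATE_DIM) + 0.01 * rng.standard_normal((STATE_DIM, STATE_DIM))
B = 0.1 * rng.standard_normal((STATE_DIM, ACTION_DIM))
target = np.zeros(STATE_DIM)  # e.g. an upright, zero-velocity reference

def rollout_cost(state, action_seq):
    """Simulate a candidate action sequence and accumulate tracking + effort cost."""
    cost = 0.0
    for a in action_seq:
        state = A @ state + B @ a                       # toy dynamics model
        cost += np.sum((state - target) ** 2) + 0.01 * np.sum(a ** 2)
    return cost

def mpc_step(state):
    """Sample action sequences, return the first action of the lowest-cost one."""
    candidates = rng.uniform(-1, 1, size=(N_CANDIDATES, HORIZON, ACTION_DIM))
    costs = [rollout_cost(state, seq) for seq in candidates]
    return candidates[int(np.argmin(costs))][0]

state = rng.standard_normal(STATE_DIM)
for t in range(100):                                    # replan at every control tick
    action = mpc_step(state)
    state = A @ state + B @ action                      # apply to the (toy) plant
```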

Project Gamma: Semantic World Modeling

Bridging natural language instructions and physical robot behavior through Vision-Language-Action (VLA) models. This enables end-users to command VISAGE with open-ended speech like "Please clean up the coffee spill" without writing a single line of code.

VLA Models · Language Grounding · Semantic Mapping
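A Vision-Language-Action policy can be thought of as a single network that consumes a camera frame plus a tokenized instruction and emits a motor command. The minimal sketch below shows that interface only; the tiny encoders, toy tokenizer, and dimensions are assumptions standing in for the much larger pretrained backbones a real VLA model would use.

```python
# Minimal VLA-style interface sketch: image + instruction -> action vector.
import torch
import torch.nn as nn

class TinyVLAPolicy(nn.Module):
    def __init__(self, vocab_size=1000, action_dim=12):
        super().__init__()
        # Image encoder: a small CNN standing in for a pretrained vision backbone.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Text encoder: mean-pooled token embeddings standing in for a language model.
        self.embed = nn.Embedding(vocab_size, 32)
        self.head = nn.Sequential(nn.Linear(32 + 32, 128), nn.ReLU(),
                                  nn.Linear(128, action_dim))

    def forward(self, image, token_ids):
        v = self.vision(image)                 # (B, 32) visual features
        t = self.embed(token_ids).mean(dim=1)  # (B, 32) instruction features
        return self.head(torch.cat([v, t], dim=-1))

def tokenize(text, vocab_size=1000):
    # Toy hash tokenizer, purely to keep the example self-contained.
    return torch.tensor([[hash(w) % vocab_size for w in text.lower().split()]])

policy = TinyVLAPolicy()
image = torch.randn(1, 3, 224, 224)            # camera frame
action = policy(image, tokenize("please clean up the coffee spill"))
print(action.shape)                            # (1, 12) motor command vector
```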

See our work in action.

Explore lab demos and research publications.

Get in Touch ↗