One Robot builds task-specific world models and an evaluation platform for robot manipulation policies.
Training end-to-end policies for robots is vibes-based today. Teams collect data, train, deploy on a real robot, find out what fails, collect more, retry. We replace the trial-and-error with rigorous validation that tells you where your policy will fail and what data to collect to fix it.
Robotics can't industrialize without an evaluation layer. We're building it.
We're solving challenging technical problems around long-horizon autoregressive generation, world model controllability, and closing the sim-to-real gap. We work with real customer data, real failures, and real deployment pressure.
We're based in San Francisco, backed by Accel, YC, several exited founders, and engineering leaders at leading AI companies.
We're small and deliberately so. Everyone is an IC with deep ownership of a wide surface area. The culture is fast iteration and direct responsibility.
Hemanth Sarabu and Elton Shon co-founded One Robot after leading robot learning together at Industrial Next (YC W22), bringing experience from Google, NASA JPL, and Tesla.
We're expanding the platform into policy training — building the components that let policies validate and improve through our world model. You'll train manipulation policies — VLAs, end-to-end imitation, RL — and push the world model and evaluation platform forward.
What you'll do:
Requirements:
One Robot builds simulation environments that are realistic to see and realistic to interact with, so robotics teams can train and evaluate robot policies without being bottlenecked by robot time.
Today, improving a VLA often means more real-world hours: setting up the scene, running trials, resetting, and repeating. This loop is slow, expensive, and hard to scale. In material handling and manufacturing assembly tasks, for example, models need far more training and evaluation data than teams can collect in the real world.
We use task-specific data to build world model-based simulation environments for hard manipulation tasks (for example, textiles and box folding). These environments help teams run more training and evals, find failure modes faster, and accelerate iteration on action policies with less dependence on real-world data collection and robot availability.
Salary: $150,000 - $275,000
Equity: 0.5% - 2%
Location: San Francisco, CA, US