We’re hiring Machine Learning Infrastructure Engineers to build the systems that make large-scale model training work. This role is for people who enjoy operating at scale: owning distributed training, core ML infrastructure, and fast iteration loops across hundreds of GPUs. If you’ve built or run large training systems in PyTorch or JAX and care about sharding, parallelism, and performance, you’ll feel at home here. You’ll work closely with researchers to remove friction, improve reliability, and make it easier to train, evaluate, and deploy models that show up in real systems.
Salary
$95,000–$205,000
Location
Palo Alto, California, United States