Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware—a position that took years to build.
About the Role

We're looking for a cloud orchestration engineer to build the operational backbone that keeps vLLM running reliably at massive scale. You'll design the systems for cluster management, deployment automation, and production monitoring that enable teams worldwide to serve AI models without friction. You'll ensure that vLLM deployments are observable, debuggable, and recoverable, turning operational complexity into infrastructure that just works.

Skills and Qualifications

Minimum qualifications:
- Bachelor's degree or equivalent experience in computer science, engineering, or a similar field.
- Strong experience with Kubernetes and container orchestration at scale.
- Experience designing and implementing custom Kubernetes operators.
- Proficiency in Python, Rust, or Go, and with infrastructure-as-code tools (Terraform, Helm, etc.).
- Experience managing GPU clusters and debugging hardware issues.
- Ability to work across cloud platforms (AWS, GCP, Azure) and on-premise infrastructure.

Preferred qualifications:
- Experience with ML-specific orchestration tools (Ray, Slurm).
- Knowledge of GPU scheduling, multi-tenancy, and resource optimization.
- Familiarity with vLLM deployment patterns and configuration.
- Track record of improving operational reliability for ML systems.
Bonus points if you have:
- Experience deploying inference systems on large-scale clusters (1,000+ GPUs).

Logistics

Location: This role is based in San Francisco, California. Remote within the US will be considered for exceptional candidates.
Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $200,000 - $400,000 USD plus equity.

Visa sponsorship: We sponsor visas on a case-by-case basis.
Benefits: Inferact offers generous health, dental, and vision benefits, as well as a 401(k) company match.
Salary: $200,000 - $400,000
Location: San Francisco, California, United States
Last stage: Seed
Investors:
Simon Mo, Co-Founder & CEO
Woosuk Kwon, Co-Founder & CTO
Kaichao You, Co-Founder & Chief Scientist