Inferact's mission is to grow vLLM as the world's AI inference engine and accelerate AI progress by making inference cheaper and faster. Founded by the creators and core maintainers of vLLM, we sit at the intersection of models and hardware—a position that took years to build.
About the Role
We're looking for a performance engineer to squeeze every FLOP out of modern accelerators. You'll write the kernels and low-level optimizations that make vLLM the fastest inference engine in the world. Your code will run on hundreds of accelerator types, from NVIDIA GPUs to emerging silicon. When hardware vendors develop new chips, they integrate with vLLM. You'll work directly with these teams to ensure we're extracting maximum performance from every generation of hardware.

Skills and Qualifications
Minimum qualifications:
- Bachelor's degree or equivalent experience in computer science, engineering, or similar.
- Deep experience writing CUDA kernels or equivalent (CuTeDSL, Triton, TileLang, Pallas).
- Strong understanding of GPU architecture: memory hierarchy, warp scheduling, tiling, tensor cores.
- Proficiency in C++ and Python with demonstrated ability to write high-performance code.
- Experience with profiling tools (Nsight, rocprof) and performance optimization methodologies.
- Obsession with benchmarks and squeezing every percentage point of speedup.

Preferred qualifications:
- Experience with ML-specific kernel optimization (FlashAttention, fused kernels).
- Knowledge of quantization techniques (INT8, FP8, mixed precision).
- Familiarity with multiple accelerator platforms (NVIDIA, AMD, TPU, Intel).
- Experience with compiler technologies (LLVM, MLIR, XLA).
Bonus points if you have:
- Kernel-related contributions to vLLM or other inference engine projects.
- Contributions to open-source GPU, ML systems, or compiler optimization projects.
- Written deep technical blogs on GPU optimization.

Logistics
Location: This role is based in San Francisco, California. We will consider remote work within the US for exceptional candidates.
Compensation: Depending on background, skills, and experience, the expected annual salary range for this position is $200,000 - $400,000 USD + equity.
Visa sponsorship: We sponsor visas on a case-by-case basis.
Benefits: Inferact offers generous health, dental, and vision benefits as well as a 401(k) company match.
Salary: $200,000 - $400,000
Location: San Francisco, California, United States
Last stage: Seed
Simon Mo, Co-Founder & CEO
Woosuk Kwon, Co-Founder & CTO
Kaichao You, Co-Founder & Chief Scientist