Morph builds the fastest LLM code-editing inference engine in the world. We hit 10,500 tok/sec per request on NVIDIA hardware.
Our stack powers high-throughput AI workflows for vibe coding apps, devtools, PR bots, and IDEs.
We’re hiring a founding ML Researcher to push the limits of model capability, throughput, and reliability across inference, retrieval, and edit application. This is a research role that ships. If your work cannot survive contact with production, it does not count here.
We’re looking for someone with broad, T-shaped experience across research, systems, and product, plus a deep spike in modern LLM training and inference. You bring taste and judgment. AI can accelerate execution. It cannot replace those.
Fast Apply - Edit files faster - 10,500 tok/sec
WarpGrep - Fast context - 5x faster agentic code search subagent; eliminates context rot
Glance - Videos of AI testing your PR, embedded in GitHub
Salary
$6,000 - $10
Location
San Francisco, CA, US
Investors
Tejas Bhakta
LinkedIn
No applications, no recruiter spam. Just the intro.
A few questions to make sure this role is the right shape for you. Two minutes.
I write the intro, send it to the founder, and handle the back-and-forth.
If they’re a yes, I book the chat. You show up — that’s the whole job-hunt.