LiteLLM is an open-source LLM Gateway with 28K+ stars on GitHub, trusted by companies like NASA, Rocket Money, Samsara, Lemonade, and Adobe. We're rapidly expanding and seeking a Forward Deployed Engineer to help customers scale the platform to handle 5K RPS (requests per second). We're based in San Francisco.
What is LiteLLM?
LiteLLM provides an open-source Python SDK and Python FastAPI server for calling 100+ LLM APIs (Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic) in the OpenAI format.
We just hit $2.5M ARR and have raised a $1.6M seed round from Y Combinator, Gravity Fund and Pioneer Fund. You can find more information on our website, Github and Technical Documentation.
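A minimal sketch of the unified interface described above, assuming the `litellm` package is installed and the relevant provider API keys are set as environment variables (model names here are illustrative):

```python
# The same OpenAI-style call shape works across providers via LiteLLM.
messages = [{"role": "user", "content": "Say hi in one word."}]

def call_model(model: str):
    """Call any supported provider through LiteLLM's unified interface."""
    from litellm import completion  # pip install litellm
    resp = completion(model=model, messages=messages)
    return resp.choices[0].message.content

if __name__ == "__main__":
    import os
    # Only the model string changes between providers; the request and
    # response shapes stay in the OpenAI format.
    if os.environ.get("OPENAI_API_KEY"):
        print(call_model("gpt-4o-mini"))
    if os.environ.get("ANTHROPIC_API_KEY"):
        print(call_model("anthropic/claude-3-haiku-20240307"))
```

Swapping providers means changing only the `model` string; application code that consumes the response does not change.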
About the Role
We're looking for a Forward Deployed Engineer to embed with our key customers, helping them successfully deploy and scale LiteLLM in production. You'll work directly with customer teams (remotely), troubleshooting complex technical issues, optimizing their infrastructure, and ensuring they extract maximum value from the platform.
This role is ideal for someone who thrives in dynamic, customer-facing environments, enjoys solving production-level challenges in real time, and can translate customer needs into actionable product improvements.
Responsibilities
Why Work At LiteLLM?
We have raised $1.6M in seed funding from top investors (Y Combinator, Gravity Fund, and Pioneer Fund), generate $10M+ in ARR, are growing exponentially, and are meaningfully profitable.
Salary
$80,000 - $120,000
Location
San Francisco, CA, US
Experience
0+ years