Zep is the memory and context layer for AI agents. As Lead Forward Deployed Engineer, you'll embed with customer engineering teams to integrate Zep into their production agent systems: diagnosing context-quality failures, designing memory architectures around their data, and shipping the integrations that make their agents actually work in the wild.
This is an applied AI engineering role with a customer surface. We're not looking for ML researchers or data scientists. We're looking for engineers who have already lived through the messy reality of taking an agent from demo to production.
What you'll do
Own end-to-end delivery for strategic deployments: scope, design, build, rollout, stabilize.
Embed with customer engineers to integrate Zep into real systems: data, APIs, auth, infra.
Ship production code: integrations, reference implementations, performance and reliability fixes.
Help level up the FDE function: coach newer FDEs on execution, review designs and code when useful, and capture repeatable patterns.
What we're looking for
6+ years of production engineering. You can own both architecture and implementation, and you've shipped systems that real customers depend on.
Hands-on AI agent / LLM application experience. You've shipped a non-trivial agentic system to production. That is, not a prototype, not a thin wrapper over a chat-completion API. We expect concrete examples: multi-turn agent loops with tool calling, retrieval and context pipelines you tuned against real failures, eval harnesses you built to catch regressions, or production memory and state systems for agents.
Working familiarity with the agent ecosystem: at least one agent framework (LangChain, LlamaIndex, or a model-provider SDK), at least one vector store (pgvector, Pinecone, Weaviate), and eval tooling (Braintrust, LangSmith, or custom harnesses).
Experience across diverse customer technology stacks and cloud platforms (AWS or GCP). Proficiency with Docker and networking fundamentals.
Fast debugging and strong operational instincts in complex, real-world environments.
Leadership through hands-on work; excellent communication for customer sessions and coaching junior engineers.
Tech stack: Python, TypeScript, AWS or GCP, Docker.
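To give a rough sense of what we mean above by a multi-turn agent loop with tool calling, here is a minimal sketch. A stub stands in for the LLM call, and the tool name, message shapes, and `run_agent` helper are all hypothetical, not part of Zep's API:

```python
import json

# Hypothetical tool registry; a real deployment would wire these to live APIs.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def stub_model(messages):
    """Stand-in for an LLM call; a real loop would call a provider SDK here."""
    last = messages[-1]
    if last["role"] == "user":
        # The "model" decides to call a tool first.
        return {"tool_call": {"name": "lookup_order", "args": {"order_id": "A-42"}}}
    # After seeing the tool result, it produces a final answer.
    result = json.loads(last["content"])
    return {"content": f"Order {result['order_id']} is {result['status']}."}

def run_agent(user_msg, max_turns=5):
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):  # bounded loop guards against runaway tool use
        reply = stub_model(messages)
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["args"])
            # Feed the tool result back so the next turn can use it.
            messages.append({"role": "tool", "content": json.dumps(result)})
            continue
        return reply["content"]
    raise RuntimeError("agent exceeded max_turns without answering")
```

The point is the loop structure: model output can be a tool call rather than an answer, tool results re-enter the context, and the loop is bounded. Production versions of this pattern, plus the retrieval, memory, and eval layers around it, are exactly the systems this role works on.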
This role is probably NOT a fit if:
Your LLM experience is limited to single-turn chat completions or RAG-as-a-feature.
You're an ML researcher or model trainer looking to move into agents — this role is for engineers already deep in agent production.
You haven't worked directly with customers on integration or delivery.
About Zep AI
Zep assembles the right context from chat history, business data, and user behavior so agents are personalized, accurate, and fast. Our open source project Graphiti hit 20k GitHub stars in under 12 months. Zep delivers sub-200ms retrieval, is SOC 2 Type II and HIPAA certified, and is used by teams from startups to Fortune 500 companies.