Liva AI is a data company building the multimodal datasets that make AI feel truly human. We operate a factory of consumer-facing software designed to capture naturally occurring voice and video data at scale. We believe multimodal models will become the primary interface for human-computer interaction, yet a massive and rapidly growing data gap stands in the way.
We’ve raised a $3M seed round (backed by YC, Amino Capital, CRV, angels from OpenAI and Meta, and more). We’re working with leading AI labs and voice-agent companies and have sold a wide variety of high-demand datasets.
As a member of our engineering team, you’ll own end-to-end systems for collecting, validating, and quality-assuring frontier multimodal data. This includes building data collection products and annotator workflows, as well as scaling the infrastructure and evaluation pipelines.
Speech models trained on internet data still fall short of realistic, human-sounding results. We solve this by collecting targeted training data for model labs, working toward a world where AI feels more human.
Salary
$140,000 - $250,000
Equity
0.5% - 1.48%
Location
San Francisco, CA, US
Total raised
$3.0M
Last stage
Seed
Investors
Y Combinator, Amino Capital, CRV, angels from OpenAI and Meta