Mirage is an AI-native video platform that intelligently orchestrates production and editing through natural language. Our models leverage contextual awareness to execute the same creative decisions a professional editor would, dramatically improving productivity for experienced teams while making video creation accessible to anyone. We're an interdisciplinary team addressing some of the most difficult technical and creative challenges in generative media. As an early member of our team, you'll tackle foundational problems that remain largely unsolved across the industry, driving an outsized impact on the future of creative expression.

More about us: Product (Captions by Mirage) · Research (Seeing Voices, technical white paper) · Updates (Mirage on X/Twitter) · Press (TechCrunch, Forbes AI 50, Fast Company)

Our Investors
We're very fortunate to have some of the best investors and entrepreneurs backing us, including Index Ventures, Kleiner Perkins, Sequoia Capital, Andreessen Horowitz, General Catalyst, Uncommon Projects, Kevin Systrom, Mike Krieger, Lenny Rachitsky, Antoine Martin, Julie Zhuo, Ben Rubin, Jaren Glover, SVAngel, 20VC, Ludlow Ventures, Chapter One, and more.

Please note that all of our roles require you to be in person at our NYC HQ (located in Union Square).
About the role
Mirage is seeking a Research Scientist to advance the frontier of multimodal video generation. You'll work on novel modeling approaches, training objectives, and scaling strategies for large-scale video models, contributing directly to systems used by millions of creators. You'll focus on pushing generation quality, controllability, and realism, especially in facial expression, audio-to-video synchronization, human motion, and storytelling, while validating ideas through real-world product impact.
Responsibilities
- Develop novel approaches to video and multimodal generative modeling
- Design new training objectives, loss functions, and evaluation methods optimized for highly compute-efficient, low-latency generation
- Explore temporal modeling, controllability, and multimodal alignment
- Conduct empirical studies to understand scaling behavior and model performance
- Drive rapid experimentation across architectures and training strategies
- Analyze model behavior and identify opportunities for improvement
- Translate research insights into measurable product improvements

What makes you a great fit
- MS/PhD in ML, CS, or a related field
- Strong publication record (NeurIPS, ICML, ICLR, etc., or equivalent work)
- Deep expertise in generative modeling (diffusion, autoregressive architectures, etc.)
- Deep understanding of transformers and modern multimodal systems
- Experience with large-scale training, empirical research, and optimizing models for real-time inference efficiency
- Strong experience working with audio representations and audio-visual datasets
Benefits
- Comprehensive medical, dental, and vision plans
- 401(k) with employer match
- Commuter benefits
- Catered lunch multiple days per week
- Dinner stipend every night if you're working late and want a bite (Grubhub subscription)
- Health & wellness perks
- Multiple team offsites per year, with team events every month
- Generous PTO policy

Captions provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.

Please note benefits apply to full-time employees only.
Salary
$175,000 - $275,000
Location
New York, NY, USA
Total raised
$175.0M
Last stage
Growth