Analyze any footage, without training a model
OnDeck is the infrastructure layer that makes Vision Language Models accessible and scalable for enterprise. We let organizations instantly find any object, behavior, or event in any footage, without needing to train a model or collect any training data.
The Pain: Creating vision models usually takes months: collecting training data, training, then deploying.
To overcome these blockers, we bet early on the power of VLMs and built a vision engine that generalizes across any task and needs no training data. We published a NeurIPS workshop paper showing our new VLM-based methods beat traditional CV even on niche tasks.
Our current customers include:
Total raised: $1.5M
Last stage: Seed
Investors:
Sepand Dyanatkar
Solving vision. Master's @ Cambridge in multi-agent RL. Researched swarm robotics for space; built software at Amazon and the European Space Agency.
Alexander Dungate
Solving vision @ OnDeck. BSc in CS + biology, National Geographic Explorer.
No applications, no recruiter spam. Just the intro.
A few questions to make sure this role is the right shape for you. Two minutes.
I write the intro, send it to the founder, and handle the back-and-forth.
If they’re a yes, I book the chat. You show up — that’s the whole job-hunt.