































































Staff Software Engineer - GenAI inference

San Francisco, California | P-1285

About This Role

As a staff software engineer for GenAI inference, you will lead the architecture, development, and optimization of the inference engine that powers the Databricks Foundation Model API. You will bridge research advances and production demands, ensuring high throughput, low latency, and robust scaling. Your work will encompass the full GenAI inference stack: kernels, runtimes, orchestration, memory, and integration with model-serving frameworks and orchestration systems.
What You Will Do

- Own and drive the architecture, design, and implementation of the inference engine, and collaborate on a model-serving stack optimized for large-scale LLM inference
- Partner closely with researchers to bring new model architectures and features (sparsity, activation compression, mixture-of-experts) into the engine
- Lead end-to-end optimization for latency, throughput, memory efficiency, and hardware utilization across GPUs and other accelerators
- Define and guide standards for building and maintaining instrumentation, profiling, and tracing tooling that uncovers bottlenecks and guides optimizations
- Architect scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads
- Ensure reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollback, and model versioning
- Collaborate cross-functionally on integrating with federated, distributed inference infrastructure: orchestrate across nodes, balance load, and handle communication overhead
- Drive cross-team collaboration with platform engineers, cloud infrastructure, and security/compliance teams
- Represent the team externally through benchmarks, whitepapers, and open-source contributions

What We Look For

- BS/MS/PhD in Computer Science or a related field
- Strong software engineering background (6+ years or equivalent) in performance-critical systems
- Proven track record of owning complex system components and driving architectural decisions end-to-end
- Deep understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc.
- Hands-on experience with CUDA, GPU programming, and key libraries (cuBLAS, cuDNN, NCCL, etc.)
- Strong background in distributed systems design, including RPC frameworks, queuing, request batching, sharding, and memory partitioning
- Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)
- Experience building instrumentation, tracing, and profiling tools for ML models
- Ability to lead through influence: work closely with ML researchers and translate novel model ideas into production systems
- Excellent communication and leadership skills, with a proactive, ownership-driven mindset
- Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving

Pay Range Transparency

Databricks is committed to fair and equitable compensation practices. The pay range(s) for this role is listed below and represents the expected salary range for non-commissionable roles or on-target earnings for commissionable roles. Actual compensation packages are based on several factors that are unique to each candidate, including but not limited to job-related skills, depth of experience, relevant certifications and training, and specific work location. Based on the factors above, Databricks anticipates utilizing the full width of the range. The total compensation package for this position may also include eligibility for annual performance bonus, equity, and the benefits listed below. For more information regarding which range your location is in, visit our page here.

Local Pay Range

$190,900 — $232,800 USD

About Databricks

Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow.
To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits

At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion

At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance

If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.