Researcher, Post Training

4 days ago


San Francisco, California, United States · Cartesia · Full time
About Cartesia

Our mission is to build the next generation of AI: ubiquitous, interactive intelligence that runs wherever you are. Today, not even the best models can continuously process and reason over a year-long stream of audio, video and text—1B text tokens, 10B audio tokens and 1T video tokens—let alone do this on-device.

We're pioneering the model architectures that will make this possible. Our founding team met as PhDs at the Stanford AI Lab, where we invented State Space Models (SSMs), a new primitive for training efficient, large-scale foundation models. Our team combines deep expertise in model innovation and systems engineering with a design-minded product engineering team to build and ship cutting-edge models and experiences.

We're funded by leading investors at Index Ventures and Lightspeed Venture Partners, along with Factory, Conviction, A Star, General Catalyst, SV Angel, Databricks and others. We're fortunate to have the support of many amazing advisors, and 90+ angels across many industries, including the world's foremost experts in AI.

About the Role

The next leap in model intelligence won't come from scale alone — it will come from better post-training and alignment. Cartesia's Post-Training team is developing the methods and systems that make multimodal models truly adaptive, aligned, and grounded in human intent.

As a Researcher on the Post-Training team, you'll work at the intersection of machine learning research, alignment, and infrastructure, designing new techniques for preference optimization, model evaluation, and feedback-driven learning. You'll explore how feedback signals can guide models to reason more effectively across modalities, and you'll build the infrastructure to measure and improve these behaviors at scale.

Your work will directly shape how Cartesia's foundation models learn, improve, and ultimately connect with people.

Your Impact

  • Own research initiatives to improve the alignment and capabilities of multimodal models

  • Develop new post-training methods and evaluation frameworks to measure model improvement

  • Partner closely with research, product, and platform teams to define best practices for creating specialized models

  • Implement, debug, and scale experimental systems to ensure reliability and reproducibility across training runs

  • Translate research findings into production-ready systems that enhance model reasoning, consistency, and human alignment

What You Bring

  • Deep knowledge of preference optimization and alignment methods, including RLHF and related approaches

  • Experience designing evaluations and metrics for generative or multimodal models

  • Strong engineering and debugging skills, with experience building or scaling complex ML systems

  • Ability to trace and diagnose complex behaviors in model performance across the training and evaluation pipeline

What Sets You Apart

  • Experience with multimodal model training (e.g., text, audio, or vision-language models)

  • Contributions to alignment research or open-source projects related to model evaluation or fine-tuning

  • Background in designing or implementing human-in-the-loop evaluation systems

Our culture

We're an in-person team based out of San Francisco. We love being in the office, hanging out together, and learning from each other every day.

We ship fast. All of our work is novel and cutting-edge, and execution speed is paramount. We have a high bar, and we don't sacrifice quality or design along the way.

We support each other. We have an open and inclusive culture that's focused on giving everyone the resources they need to succeed.
