AI Researcher, Post Training job opportunity at Lovable.



Lovable AI Researcher, Post Training
Experience: General
Employment type: Full-time

Location: Stockholm, Sweden

TL;DR
Lovable lets over 2 million people build software using plain language, and the models behind it need to be exceptional. We're hiring an engineer who has gotten their hands dirty with post-training at scale and wants to do it again for one of the fastest-growing AI products in the world.

You'll own our full post-training pipeline: translating the latest research into production training recipes, adapting them for code generation and agent workloads, and putting improved models in front of users fast. The goal is to get promising research into production within days or weeks, not months. This isn't an academic research position - you'll spend as much time in production infrastructure as in training configs, and your success is measured by what ships.

Why Lovable?
Lovable lets anyone and everyone build software using plain language. From solopreneurs to Fortune 100 teams, millions of people use Lovable to transform raw ideas into real products - fast. We are at the forefront of a foundational shift in software creation, which means you have an unprecedented opportunity to change the way the digital world works. Over 2 million people in 200+ countries already use Lovable to launch businesses, automate work, and bring their ideas to life. And we're just getting started.

We're a small, talent-dense team building a generation-defining company from Stockholm. We value extreme ownership, high velocity, and low-ego collaboration. We seek out people who care deeply, ship fast, and are eager to make a dent in the world.

What we're looking for
- You've personally run post-training jobs on large language models - RFT/RLVR, preference optimization, or similar. Not just called APIs or written prompts, but actually trained and iterated on models.
- You can write solid production code. The systems you build need to run reliably, not just produce interesting research artifacts.
- You're fluent in at least one major ML framework (PyTorch, JAX) and comfortable working with distributed training setups and GPU clusters.
- You understand the math behind preference optimization, reward modeling, and alignment techniques - and can reason about when each approach fits.
- You've built or significantly contributed to evaluation systems that capture real-world quality, not just benchmark scores.
- You can trace a model quality regression from user-facing symptoms back through serving, inference, and training - and you enjoy doing it.
- You want to ship. Research taste matters, but at Lovable the question is always "how fast can we get this to users?"

Preferred
- You've worked on code generation or agentic use cases specifically.
- You've put post-trained models into the hands of real users and seen how they hold up at scale.
- You've owned the full loop: curating data, running training, evaluating results, deploying, and monitoring in production.
- You have a habit of reading a paper on Monday and having a prototype running by Friday.
- You've experimented with speculative decoding or similar techniques to improve model efficiency.
- You have strong views on evaluation methodology and have built evals that actually predict user satisfaction.
- You've published or contributed meaningfully to the open-source ML ecosystem.

What you'll do
- Own the full lifecycle of Lovable's post-training pipeline - from data curation and training runs through evaluation and deployment.
- Apply and adapt reinforcement learning, preference optimization, and supervised fine-tuning methods to make our models better at generating code, reasoning about user intent, and acting as reliable agents.
- Build the evaluation and experimentation infrastructure that tells us whether a model change actually helps users - covering helpfulness, safety, latency, and reliability.
- Develop and operate the production systems that run training jobs at scale, including GPU orchestration and data pipelines.
- Work across team boundaries with our agent, product, and infrastructure engineers to turn model gains into product improvements users can feel.
- Investigate and resolve failures end-to-end - whether the root cause is in a training recipe, a data issue, or a serving regression.
- Read papers, run experiments, and move fast: the goal is to get promising research into production within days or weeks, not months.

About your application
Please submit your application in English. It's our company language, so you'll be speaking lots of it if you join. We treat all candidates equally - if you're interested, please apply through our careers portal.
