poolside is hiring for a

Member of Engineering, Pre-training

Full-time, indefinite contract
Remote EMEA/East Coast or Paris
Apply for this role

About poolside

poolside is pursuing AGI. Only a handful of companies in the world are able to compete in this race. We're early in our journey, which makes it a great time to join the team, but we also have scale-up+ needs when it comes to product and engineering. We can tell you why over the phone...

About our team

We are a remote-first team that sits across Europe and North America. We come together once a month in-person for 3 days, always Monday-Wednesday. We also do longer off-sites twice a year.

Our team is a combination of more research-oriented and more engineering-oriented profiles; however, everyone deeply cares about the quality of the systems we build and has a strong underlying knowledge of software development. We believe that good engineering leads to faster development iterations, which allows us to compound our efforts.

About the role

You would be working on our pre-training team, focused on building out our distributed training of Large Language Models and on major architecture changes. This is a hands-on role where you'll be designing and implementing LLM architectures (dense & sparse) and distributed training code, all the way from data to tensor parallelism, while researching potential optimizations (from basic operations to communication) and new architectures & distributed training strategies. You will have access to thousands of GPUs on this team.

Your mission

To train the best foundational models for source code generation in the world in minimum time and with maximum hardware utilization.


  • Follow the latest research on LLMs and source code generation. Propose and evaluate innovations, both in the quality and the efficiency of the training.
  • Do LLM-Ops: babysit and analyze experiments, and iterate.
  • Write high-quality Python, Cython, C/C++, Triton, CUDA code.
  • Work as part of the team: plan future steps, discuss, and always stay in touch.

Skills & Experience

  • Experience with Large Language Models (LLM)
    • Deep knowledge of Transformers is a must
    • Knowledge of/experience with cutting-edge training tricks
    • Knowledge of/experience with distributed training
    • Trained LLMs from scratch
    • Coded LLMs from scratch
    • Knowledge of deep learning fundamentals
  • Strong machine learning and engineering background
  • Research experience
    • Authorship of scientific papers on topics such as applied deep learning, LLMs, or source code generation is a nice-to-have
    • Can freely discuss the latest papers and descend into the fine details
    • Is reasonably opinionated
  • Programming experience
    • Linux
    • Strong algorithmic skills
    • Python with PyTorch or Jax
    • C/C++, CUDA, Triton
    • Use modern tools and are always looking to improve
    • Strong critical thinking and ability to question code quality policies when applicable
    • Prior experience in non-ML programming, especially outside Python, is a nice-to-have


Interview process

  • Intro call with one of our Founding Engineers
  • Technical Interview(s) with one of our Founding Engineers
  • Team-fit call with Beatriz, our Head of People
  • Meet & greet call with Eiso, our CTO & Co-Founder


Benefits

  • Fully remote work & flexible hours;
  • 37 days/year of vacation & holidays;
  • Health insurance allowance for you and dependents;
  • Company-provided equipment;
  • Wellbeing, always-be-learning and home office allowances;
  • Frequent team get togethers in Paris;
  • Great diverse & inclusive people-first culture.