AuroraGPT

Large Scale Training on Diverse Accelerators

Sam Foreman

[email protected]

Argonne National Laboratory

Scalable Deep Learning

@ SIAM Annual Meeting 2025

2025-05-21

🎯 AuroraGPT: Goals

AuroraGPT: a general-purpose scientific LLM, broadly trained on general corpora plus scientific {papers, texts, data}

  • Explore pathways towards a “Scientific Assistant” model
  • Build with international partners (RIKEN, BSC, others)
  • Multilingual: English, 日本語 (Japanese), French, German, Spanish
  • Multimodal: images, tables, equations, proofs, time series, graphs, fields, sequences, etc.
Figure 1: Image from Hannibal046 / Awesome-LLM
  • Here to talk about AuroraGPT, Argonne’s internal effort to build a general-purpose scientific LLM, broadly trained on general corpora of text + scientific {papers, text, data}

  • As part of this effort, we plan to…

    • Explore pathways, build with international partners, multi-{lingual, modal}
  • Rough timeline of the project and deliverables:

    • 202{3,4}: text-only models, plan to release a series of {7B, 70B, 1T} models
    • 202{4,5}: Basic multi-modal models
    • 202{5,6}: Advanced scientific multimodal models
  • AuroraGPT: Exascale Pre-Training of Large Language Models on Diverse Accelerators > argonne-lcf/Megatron-DeepSpeed > Large Model Training: any scale, any accelerator

  • Rough outline for today:
  • Goals

  • Issues with existing models

  • AuroraGPT

    • Project Details
    • Teams, Ongoing Efforts
    • Scientific Evaluations
  • Scaling Results

    • MProt-DPO
    • aeris (??)

🧪 AuroraGPT: Open Science Foundation Model

Figure 2: High-level overview of AuroraGPT project
  • AuroraGPT will be a publicly distributed, open source foundation model for open science
  • Is being trained on:
    • Scientific / engineering structured data
    • General text, media, news, etc.
    • Large amounts of low- to medium-quality data
    • Much less high-quality data (that is publicly available for use)
  • This data is then cleaned, processed, de-duplicated and used for the initial pre-training phase of the model
  • The vast majority of the overall compute is spent during this initial pre-training phase
    • This is the group I help to lead and will be talking a bit about today
  • The initial pre-training phase is currently underway
    • Eventually, given a bit of time, effort, and magic, the model will be ready for fine-tuning and additional training
  • The pretrained model will then be handed off for fine-tuning on a variety of downstream tasks:
    • Scientific discovery
    • Accelerate scientific tasks
    • Digital twins
    • Inverse design
    • Code optimization
    • Accelerated simulations
    • Autonomous experiments
    • Co-design
  • Becoming increasingly clear that LLMs have the potential to drastically accelerate computational science
    • We’ve seen this already for {GenSLMs, Weather / Climate / Earth Systems Modeling, Particle Physics, etc.}

🧰 AuroraGPT: Toolbox

  • Datasets and data pipelines (how do we deal with scientific data?)
  • Software infrastructure and workflows (scalable, robust, extensible)
  • Evaluation of state-of-the-art LLM Models (how do they perform on scientific tasks?)

🚂 Training

argonne-lcf/Megatron-DeepSpeed
Large Model Training: Any Scale, Any Accelerator

🏃‍♂️ Running

argonne-lcf/inference-endpoints
Inference endpoints for LLMs, hosted @ ALCF

🌌 Aurora

Table 1: Aurora Specs
Racks:  166
Nodes:  10,624
CPUs:   21,248
GPUs:   63,744
NICs:   84,992
HBM:    8 PB
DDR5c:  10 PB
Figure 3: Aurora1: Fact Sheet.
  1. 🏆 Aurora Supercomputer Ranks Fastest for AI

🤝 Teams

  • Planning
  • Data
    • Aggregate existing data and generate new (synthetic) data
  • Models / Training1
    • Pre-train a series of models on publicly available data
  • Post-Training
    • Fine-tuning, alignment, reinforcement learning
  • Evaluation
    • Skills, trustworthiness, safety, robustness, privacy, machine ethics
  • Inference
    • Model serving, API development / public-facing web services
  • Distribution
    • Licensing, generating and distributing artifacts for public consumption
  • Communication

Generating / curating / aggregating / cleaning / understanding new data for training, including: MCQs + scientific narratives, new scientific data modalities (gene sequences, geospatial data, …)

  1. Sam Foreman (co-lead), Varuni Sastry, Marieme Ngom, …

🍎 Training LLMs

  • Want to minimize cost of training
    • Maximize throughput (?)
      • Data parallelism takes us only so far (McCandlish et al. 2018)…
  • Possible directions:
    • Large batch training (?)
      • new (second order?) optimizers
    • Better tokenization schemes (no tokenizers ?)
      • Better data (?)
    • Alternative architecture(s) (?)
      • Diffusion / flow-matching
      • Sub-quadratic attention (state space models, …)

argonne-lcf/Megatron-DeepSpeed

🎯 Goals

We need our implementation1 to be:

  • 💯 Correct
    • Consistent across systems
    • Requires being able to run the same code on multiple different machines
    • Independent of hardware and communication library (e.g. CUDA, ROCm, XPU, CPU, MPS, …); see the sketch below
  • 🚀 Scalable
    • Performant across thousands of GPUs
    • Highly configurable and extensible
    • Parallelizable across (tensor, pipeline, sequence) dimension(s)
    • Robust against {hardware, network, filesystem, transient} failures2
  1. argonne-lcf/Megatron-DeepSpeed

  2. Very much a WIP
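
To make the hardware / communication-library independence concrete, here is a minimal sketch (illustrative only; the device-to-backend mapping is an assumption, not the actual argonne-lcf/Megatron-DeepSpeed or ezpz code):

```python
# Minimal sketch (assumption: not the actual Megatron-DeepSpeed / ezpz code):
# pick a torch device and a matching distributed backend without hard-coding CUDA.
import torch


def get_torch_device() -> str:
    """Return the best available accelerator type on this machine."""
    if torch.cuda.is_available():  # NVIDIA (CUDA) or AMD (ROCm builds of PyTorch)
        return "cuda"
    if hasattr(torch, "xpu") and torch.xpu.is_available():  # Intel XPU (e.g. Aurora)
        return "xpu"
    if torch.backends.mps.is_available():  # Apple Silicon
        return "mps"
    return "cpu"


def get_dist_backend(device: str) -> str:
    """Map a device type to a torch.distributed backend (illustrative mapping)."""
    return {"cuda": "nccl", "xpu": "ccl", "mps": "gloo", "cpu": "gloo"}[device]
```

On Aurora, the `ccl` backend assumes the oneCCL bindings for PyTorch are installed; NVIDIA and ROCm builds both fall through to `nccl`.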

🏋️ Challenges: In Practice

This is incredibly difficult in practice, due in part to:

  • Brand new {hardware, architecture, software}
  • Lack of native support in existing frameworks (though getting better!)
  • General system stability
    +10k Nodes $\left(\times \frac{12\ \mathrm{XPU}}{1\ \mathrm{Node}}\right) \Rightarrow$ +100k XPUs
    • network performance
    • file system stability (impacted by other users !)
    • many unexpected difficulties occur at increasingly large scales
  • Combinatorial explosion of possible configurations and experiments
    • {hyperparameters, architectures, tokenizers, learning rates, …}

💾 Training: 2T Tokens

  • To train a fixed model on trillions of tokens requires:
    1. Aggregating data from multiple different corpora
      (e.g. ArXiv, Reddit, StackExchange, GitHub, Wikipedia, etc.)
    2. Sampling each training batch according to a fixed distribution across corpora (see the sketch below)
    3. Building indices that map batches of tokens into these files (indexing)

    The original implementation was slow:

    • Designed to run serially on a single device
    • Major bottleneck when debugging data pipeline at scale
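
A minimal sketch of step 2 above, sampling batches from a fixed distribution across corpora (the corpus names and weights here are hypothetical, not the actual AuroraGPT data mixture):

```python
# Minimal sketch: sample each training batch from multiple corpora according
# to a fixed blending distribution. Corpus names and weights are hypothetical.
import numpy as np

CORPUS_WEIGHTS = {"arxiv": 0.30, "github": 0.20, "wikipedia": 0.10, "web": 0.40}
names = list(CORPUS_WEIGHTS)
probs = np.array([CORPUS_WEIGHTS[n] for n in names])
probs = probs / probs.sum()              # normalize to a proper distribution

rng = np.random.default_rng(seed=0)      # fixed seed -> reproducible blend


def sample_batch_sources(batch_size: int) -> list[str]:
    """For each sample in a batch, choose which corpus it is drawn from."""
    return list(rng.choice(names, size=batch_size, p=probs))


# e.g. sample_batch_sources(8) -> ['web', 'arxiv', 'web', 'github', ...]
```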

🍹 Blending Data, Efficiently

  • 🐢 Original implementation:
    • Slow (serial, single device)
    • ~ 1 hr/2T tokens
  • 🐇 New implementation:
    • Fast! (distributed, asynchronous; see the sketch below)
    • ~ 2 min/2T tokens
      (30× faster!)
Figure 4: Time spent preparing 2T tokens
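
One way to get this kind of speedup is to parallelize the per-file index construction across ranks; the sketch below is an assumed illustration of that pattern, not the actual implementation:

```python
# Minimal sketch (assumed approach): instead of building every per-corpus index
# serially on one device, each rank builds indices for its own slice of files,
# then the shards are gathered so every rank sees the full index.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, world_size = comm.Get_rank(), comm.Get_size()

# Hypothetical list of tokenized corpus shards.
corpus_files = [f"corpus_{i:03d}.bin" for i in range(256)]


def build_index(path: str) -> dict:
    """Placeholder for the (expensive) per-file index construction."""
    return {"file": path, "num_samples": 0}


# Each rank indexes only every world_size-th file, starting at its own rank...
local = [build_index(f) for f in corpus_files[rank::world_size]]
# ...then all shards are gathered so every rank ends up with the full index.
full_index = [idx for per_rank in comm.allgather(local) for idx in per_rank]
```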

📉 Loss Curve: Training AuroraGPT-7B on 2T Tokens

Figure 5: Loss curve during training on 2T tokens.

✨ Features

  • 🕸️ Parallelism:
    • {data, tensor, pipeline, sequence, …}
  • ♻️ Checkpoint Converters:
    • Megatron ⇄ 🤗 HF ⇄ ZeRO ⇄ Universal
  • 🔀 DeepSpeed Integration:
    • ZeRO Offloading
    • Activation checkpointing
    • AutoTP (WIP)
    • Ability to leverage features from the DeepSpeed community
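
As an illustration of the ZeRO offloading and activation-checkpointing features above, a minimal DeepSpeed config might look like the following (hypothetical values, not the AuroraGPT production settings):

```python
# Minimal sketch (illustrative values, not the AuroraGPT production config):
# a DeepSpeed config enabling ZeRO offloading and activation checkpointing.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,                              # partition params, grads, optimizer states
        "offload_optimizer": {"device": "cpu"},  # push optimizer states to host memory
        "offload_param": {"device": "cpu"},      # push (unused) params to host memory
    },
    "activation_checkpointing": {
        "partition_activations": True,           # shard activations across model-parallel ranks
        "contiguous_memory_optimization": True,
    },
}
# Passed to deepspeed.initialize(model=model, config=ds_config, ...)
```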

✨ Features (even more!)

  • 🧗 Optimizers1:
    • Support for many different optimizers:
      • Distributed Shampoo, Muon, Adopt, Sophia, Lamb, GaLORE, ScheduleFree, …
    • See full list
    • Large batch training
  • 📊 Experiment Tracking:
    • Automatic experiment and metric tracking with Weights & Biases
  1. Implemented by Marieme Ngom
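
A minimal sketch of the experiment-tracking pattern above (rank-0 logging to Weights & Biases; the project name and metric keys are placeholders):

```python
# Minimal sketch (assumed pattern, not the exact integration): log training
# metrics to Weights & Biases from rank 0 only, to avoid duplicate runs.
import wandb


def setup_wandb(rank: int, config: dict) -> None:
    if rank == 0:                       # one writer per job
        wandb.init(project="AuroraGPT", config=config)


def log_metrics(rank: int, step: int, loss: float, lr: float) -> None:
    if rank == 0:
        wandb.log({"train/loss": loss, "train/lr": lr}, step=step)
```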

🔭 LLMs for Science
source (@tenderizzation)
ChatGPT: explain this image

🤔 Evaluating Models on Scientific Applications

  • What to measure?
    • Knowledge Extraction, Retrieval, Distillation, Synthesis: LLM is provided a question or instruction and a truthful answer is expected
    • Text Grounded: Answers are expected to be fully grounded on peer-reviewed references to support responses
    • Reasoning: LLMs are expected to solve deductive (prove a theory or hypothesis from formal logic and observations) and inductive (validate / explain observations from theories) problems
    • Creativity: A creative answer is expected from a question or instruction
      • thoughtful dialogue, coding, etc.

⚖️ Evaluating FM Skills for Science: Criteria

  • Criteria for all of the above:
    • Correctness of facts
    • Accuracy of solutions and inferences
    • Reliability: consistently good in quality or performance
    • Speed: how fast to produce a response
    • # shots: how many examples are needed for good quality
      • Extent of prompt engineering

🧬 MProt-DPO: Scaling Results

Figure 6: Scaling results for 3.5B model across ~38,400 GPUs
  • ~ 4 EFLOPS @ Aurora

  • 38,400 XPUs
    = 3200 [node] x 12 [XPU / node]

  • 🔔 Gordon Bell Finalist1:

    • MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design Workflows
  1. (Dharuman et al. 2024)
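
Back-of-the-envelope, the sustained per-device throughput implied by the figures above (not a number reported in the paper) is roughly:

$$
\frac{4\ \mathrm{EFLOPS}}{38{,}400\ \mathrm{XPUs}}
= \frac{4\times 10^{18}}{3.84\times 10^{4}}\ \frac{\mathrm{FLOPS}}{\mathrm{XPU}}
\approx 1.04\times 10^{14}\ \frac{\mathrm{FLOPS}}{\mathrm{XPU}}
\approx 104\ \mathrm{TFLOPS\ per\ XPU}
$$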

🧬 MProt-DPO: Scaling Results

Figure 7: 3.5B model
Figure 8: 7B model

🚂 Loooooooooong Sequence Lengths

  • Working with Microsoft/DeepSpeed team to enable longer sequence lengths (context windows) for LLMs
    • See my blog post for additional details

Figure 9: Maximum (achievable) SEQ_LEN for both 25B and 33B models (see Song et al. (2023))
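
To illustrate the idea behind sequence parallelism, here is a toy sketch of sharding activations along the sequence dimension (not the DeepSpeed-Ulysses implementation):

```python
# Toy sketch of sequence parallelism: shard activations along the sequence
# dimension so each rank only holds seq_len / world_size tokens.
import torch


def shard_sequence(x: torch.Tensor, rank: int, world_size: int) -> torch.Tensor:
    """x: [batch, seq_len, hidden] -> local shard [batch, seq_len // world_size, hidden]."""
    assert x.shape[1] % world_size == 0, "seq_len must divide evenly across ranks"
    return x.chunk(world_size, dim=1)[rank]


# e.g. an 8192-token sequence split across 8 ranks -> 1024 tokens of activations per rank.
x = torch.empty(1, 8192, 512)
local = shard_sequence(x, rank=0, world_size=8)   # shape: [1, 1024, 512]
```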

scaling4science
Megatron-DS-Benchmarking

📓 References

  • argonne-lcf / Megatron-DeepSpeed
    For the largest of large language models.
  • saforem2 / ezpz
    Distributed training, ezpz. 🍋
  • 📊 See my other slides at samforeman.me/talks:
    • LLMs from Scratch
    • Creating Small(~ish) LLMs
    • Parallel Training Techniques
    • LLMs on Polaris
    • Training LLMs at Scale
  • 👀 See also:
    • New international consortium for generative AI models for science
    • PyTorch Distributed Overview
    • 🤗 Efficient Training on Multiple GPUs
    • Getting Started - DeepSpeed
    • 🕸️ Quality Measures for Dynamic Graph Generative Models
      (Hosseini et al. 2025)

❤️ Thank you!

  • Organizers

  • Feel free to reach out!

🙏 Acknowledgements

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

📑 Bibliography

  • Refs:
    • Wei et al. (2022)
    • Animations from The Illustrated Transformer
Dharuman, Gautham, Kyle Hippe, Alexander Brace, Sam Foreman, Väinö Hatanpää, Varuni K. Sastry, Huihuo Zheng, et al. 2024. “MProt-DPO: Breaking the ExaFLOPS Barrier for Multimodal Protein Design Workflows with Direct Preference Optimization.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage, and Analysis. SC ’24. Atlanta, GA, USA: IEEE Press. https://doi.org/10.1109/SC41406.2024.00013.
Hosseini, Ryien, Filippo Simini, Venkatram Vishwanath, Rebecca Willett, and Henry Hoffmann. 2025. “Quality Measures for Dynamic Graph Generative Models.” In The Thirteenth International Conference on Learning Representations. https://openreview.net/forum?id=8bjspmAMBk.
McCandlish, Sam, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. 2018. “An Empirical Model of Large-Batch Training.” https://arxiv.org/abs/1812.06162.
Song, Shuaiwen Leon, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, et al. 2023. “DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery Through Sophisticated AI System Technologies.” https://arxiv.org/abs/2310.04610.
Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. 2022. “Emergent Abilities of Large Language Models.” https://arxiv.org/abs/2206.07682.

samforeman.me/talks/AuroraGPT-SIAM25/slides
