AuroraGPT

Sam Foreman
Argonne National Laboratory

Published October 30, 2024

🎯 AuroraGPT Goals

AuroraGPT: General-purpose scientific LLM
Broadly trained on general corpora plus scientific {papers, texts, data}

  • Explore pathways towards a "Scientific Assistant" model
  • Build with international partners (RIKEN, BSC, others)
  • Multilingual: English, Japanese, French, German, Spanish
  • Multimodal: images, tables, equations, proofs, time series, graphs, fields, sequences, etc.
Figure 1: Image from Hannibal046/Awesome-LLM
Figure 2: Credit to the entire AuroraGPT team for slides.
  • Here to talk about AuroraGPT, Argonne's internal effort to build a general-purpose scientific LLM, broadly trained on general text corpora plus scientific {papers, text, data}

  • As part of this effort, we plan to…

    • Explore pathways, build with international partners, multi-{lingual, modal}
  • Rough timeline of the project and deliverables:

    • 202{3,4}: text-only models, plan to release a series of {7B, 70B, 1T} models
    • 202{4,5}: Basic multi-modal models
    • 202{5,6}: Advanced scientific multimodal models

🧪 AuroraGPT: Open Science Foundation Models

  • AuroraGPT will be a publicly distributed, open source foundation model for open science
  • It is being trained on:
    • Scientific / engineering structured data
    • General text, media, news, etc.
    • Large amounts of low- to medium-quality data
    • Much less high-quality data (that is publicly available for use)
  • This data is then cleaned, processed, de-duplicated and used for the initial pre-training phase of the model
  • The vast majority of the overall compute is spent during this initial pre-training phase
    • This is the group I help to lead and will be talking a bit about today
  • The initial pre-training phase is currently underway
    • Eventually, given a bit of time, effort and magic, the model will be ready for fine-tuning and additional training for a variety of downstream tasks
  • The pretrained model will then be handed off for additional fine-tuning on a variety of downstream tasks
    • Scientific discovery
    • Accelerate scientific tasks
    • Digital twins
    • Inverse design
    • Code optimization
    • Accelerated simulations
    • Autonomous experiments
    • Co-design
  • Becoming increasingly clear that LLMs have the potential to drastically accelerate computational science
    • We've seen this already for {GenSLMs, Weather / Climate / Earth Systems Modeling, Particle Physics, etc.}

📊 AuroraGPT Outcomes

  • Datasets and data pipelines for preparing science training data
  • Software infrastructure and workflows to train, evaluate and deploy LLMs at scale for scientific research purposes
  • Evaluation of state-of-the-art LLMs to determine where they fall short on deep scientific tasks and where deep data may have an impact
  • Assessment of the approach of augmenting web training data with two forms of data specific to science (a toy sketch of the structured-to-narrative mapping appears at the end of this section):
    • Full text scientific papers
    • Structured scientific datasets (suitably mapped to narrative form)
  • Research-grade artifacts (models) for the scientific community to adapt for downstream uses
  • Promotion of responsible AI best practices where we can figure them out
  • International collaborations around the long-term goal of AGI for science
  • Deliverables:

    • datasets, pipelines
    • software infrastructure, workflows to interface with science applications
    • checkpoints, models, logs, workbook, insights, etc.
  • Hope to understand:

    • How different state-of-the-art models perform at different scientific tasks
    • Where deep data may have an impact
    • Feasibility of generically augmenting text with scientific structured data
  • Huge undertaking that will require large international collaborations around the long-term goal of AGI for science

  • Extra points:

    • Well known that LLMs are good for non-consequential tasks
    • Known to "hallucinate" and create false information
    • Can this be mitigated reliably?
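
To make the idea of "structured scientific datasets (suitably mapped to narrative form)" concrete, here is a minimal toy sketch of such a mapping; the record schema, field names, and template below are hypothetical illustrations, not the project's actual conversion pipeline.

```python
# Toy sketch: mapping one structured scientific record into narrative text
# that can be mixed into a language-model training corpus.
# The schema, values, and template are hypothetical, for illustration only.
record = {
    "sample_id": "A-17",
    "quantity": "melting point",
    "value": 1234.5,
    "unit": "K",
    "method": "differential scanning calorimetry",
}


def to_narrative(rec: dict) -> str:
    """Render a flat record as a single declarative sentence."""
    return (
        f"The {rec['quantity']} of sample {rec['sample_id']} was measured to be "
        f"{rec['value']} {rec['unit']} using {rec['method']}."
    )


if __name__ == "__main__":
    print(to_narrative(record))
    # The melting point of sample A-17 was measured to be 1234.5 K using
    # differential scanning calorimetry.
```

How much of this narrativized data to blend in, and whether it actually improves scientific skills, is part of what the assessment above aims to measure.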

🌌 Aurora

Table 1: Aurora Specs

  Racks   166
  Nodes   10,624
  CPUs    21,248
  GPUs    63,744
  NICs    84,992
  HBM     8 PB
  DDR5    10 PB
Figure 4: Aurora Fact Sheet

🤖 ALCF AI Testbed

  • ALCF AI Testbed Systems are in production and available for allocations to the research community
  • Significant improvements in time-to-solution and energy efficiency for diverse AI-for-science applications.
  • NAIRR Pilot

Up to 25× improvement for genomic foundation models, with 6.5× energy efficiency

Figure 5: SambaNova SN-30: 2nd Gen, 8 nodes with 64 AI Accelerators
Figure 6: Graphcore Bow: Pod-64 configuration with 64 Bow-generation accelerators
Figure 7: Cerebras: 2x CS-2 WSE with Memory-X and Swarm-X technologies
Figure 8: GroqRack: 9 nodes, with 8 GroqChip v1.5 tensor streaming processor accelerators per node

👥 Team Leads

Planning
  • Rick Stevens¹
  • Ian Foster
  • Rinku Gupta
  • Mike Papka
  • Arvind Ramanathan
  • Fangfang Xia

Data
  • Ian Foster
  • Robert Underwood

Training
  • Venkat Vishwanath
  • Sam Foreman

Evaluation
  • Franck Cappello
  • Sandeep Madireddy
  • Bo Li

Post
  • Eliu Huerta
  • Azton Wells

Inference
  • Rajeev Thakur

Comms
  • Charlie Catlett
  • David Martin

Distribution
  • Brad Ullrich

๐Ÿค Teams

  • Planning
  • Data Prep
    • Accumulate 20+ T tokens of high-quality scientific text and structured data
  • Models / Training²
    • Train (entirely from scratch) a series of models on publicly available data
  • Evaluation
    • Skills, trustworthiness, safety, robustness, privacy, machine ethics
  • Post-Training
    • Fine-tuning, alignment
  • Inference
    • Model serving, API development / public-facing web services
  • Distribution
    • Licensing, generating and distributing artifacts for public consumption
  • Communication

🦜 Model Training

✅ Goals

  • Want training runs at scale to be:
    • efficient
    • stable
    • reproducible
  • This requires:
    • robust data pipelines / file IO
    • effectively overlapping compute with communication
    • stability across {network, filesystem, machine}
  • 3D / multi-dimensional parallelism strategies (see the rank-decomposition sketch after this list)
  • Large batch training
  • Second order optimizers
  • Sub-quadratic attention
  • State space models
  • Highly optimized GPU kernels
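
As a concrete illustration of the multi-dimensional parallelism point above, the sketch below shows how a fixed world size factors into tensor-, pipeline-, and data-parallel coordinates; the group sizes and the helper function are illustrative assumptions, not the configuration used in the actual runs.

```python
# Minimal sketch: factoring a fixed WORLD_SIZE into a 3D
# (data x pipeline x tensor) parallel decomposition.
# Group sizes here are illustrative assumptions, not the values
# used in the actual AuroraGPT training runs.
WORLD_SIZE = 1024   # total ranks (e.g., GPU tiles across nodes)
TP = 8              # tensor-model-parallel group size (typically within a node)
PP = 4              # pipeline-model-parallel group size (across nodes)

assert WORLD_SIZE % (TP * PP) == 0
DP = WORLD_SIZE // (TP * PP)   # data-parallel degree is whatever remains


def coords(rank: int) -> tuple[int, int, int]:
    """Map a global rank to its (dp, pp, tp) coordinates."""
    tp = rank % TP
    pp = (rank // TP) % PP
    dp = rank // (TP * PP)
    return dp, pp, tp


if __name__ == "__main__":
    print(f"DP={DP}, PP={PP}, TP={TP}")
    # Ranks that share (dp, pp) and differ only in tp form one
    # tensor-parallel group; similarly for the other two dimensions.
    for rank in (0, 7, 8, 32):
        print(rank, coords(rank))
```

Every additional model-parallel dimension shrinks the data-parallel degree (and adds communication), which is why overlapping that communication with compute is listed as a requirement above.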

โŒ Challenges

  • Looong time to train, can be:
    • weeks (even months) of continuous training
    • order of magnitude longer than typical NN training jobs
  • Stability issues:
    • failures are expensive (but inevitable); see the checkpoint / resume sketch after this list
    • stragglers common at scale
  • Individual jobs are:
    • fragile
    • only as good as the worst rank
    • one hang or bad worker can crash job
    • network / filesystem / other-user(s) dependent
  • Cost / benefits of different collective communication algorithms
    • depend on optimized / efficient implementations
  • Network performance
  • Highly optimized GPU kernels
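
Since failures are inevitable over weeks of training, long jobs are typically written so they can die and resume with bounded lost work. The sketch below shows that generic checkpoint / resume pattern; the directory layout, save interval, and function names are hypothetical, and this is not the project's actual training loop.

```python
# Generic checkpoint / resume pattern for long, failure-prone training jobs.
# This is an illustrative sketch, not the actual AuroraGPT training loop;
# CKPT_DIR, SAVE_EVERY, and the function names are hypothetical.
from pathlib import Path

import torch

CKPT_DIR = Path("checkpoints")
SAVE_EVERY = 500  # steps between saves: checkpoint I/O cost vs. lost work


def latest_checkpoint() -> Path | None:
    ckpts = sorted(CKPT_DIR.glob("step_*.pt"))
    return ckpts[-1] if ckpts else None


def save(step: int, model, optimizer) -> None:
    CKPT_DIR.mkdir(exist_ok=True)
    tmp = CKPT_DIR / f"step_{step:09d}.pt.tmp"
    torch.save(
        {"step": step, "model": model.state_dict(), "optim": optimizer.state_dict()},
        tmp,
    )
    # Rename so a crash mid-write never leaves a truncated file under the final name.
    tmp.rename(CKPT_DIR / f"step_{step:09d}.pt")


def train(model, optimizer, total_steps: int) -> None:
    start = 0
    if (ckpt := latest_checkpoint()) is not None:
        state = torch.load(ckpt, map_location="cpu")
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optim"])
        start = state["step"] + 1
    for step in range(start, total_steps):
        ...  # forward / backward / optimizer.step() elided
        if step % SAVE_EVERY == 0:
            save(step, model, optimizer)
```

The save interval trades checkpoint I/O (substantial at these model sizes) against how much work is lost when a hang or bad worker takes the job down.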

🚀 Accelerating Dataset Processing at Scale for Training

  • To train a fixed model on trillions of tokens requires:
    • Aggregating data from multiple different corpora (e.g. Reddit, StackExchange, GitHub, etc.)
    • Sampling each training batch according to a fixed distribution across corpora
    • Building indices that map batches of tokens into these files (indexing); a toy sketch of this blending / indexing step follows below
  • The original implementation was slow and designed to run on a single device
    • Major bottleneck when debugging the data pipeline at scale
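
The bullets above amount to building a weighted blend over corpora plus an index from global sample ids into per-corpus files. Below is a minimal toy sketch of that step (the corpus names, sizes, and weights are made up); the real implementation precomputes and caches such indices rather than sampling on the fly, which is roughly where the single-device bottleneck described above appeared.

```python
# Toy sketch of weighted corpus blending: pick each sample's source corpus
# according to fixed weights, then an index within that corpus.
# Corpus names, sizes, and weights are made up for illustration.
import numpy as np

corpora = {                # corpus -> number of samples available
    "papers": 4_000_000,
    "code": 2_500_000,
    "web_text": 9_000_000,
}
names = list(corpora)
weights = np.array([0.5, 0.2, 0.3])   # target sampling distribution (sums to 1)


def build_sample_index(num_samples: int, seed: int = 0) -> list[tuple[str, int]]:
    """Return (corpus, local_index) pairs: the mapping from global sample
    ids to positions inside the per-corpus token files."""
    rng = np.random.default_rng(seed)
    picks = rng.choice(len(names), size=num_samples, p=weights)
    return [(names[c], int(rng.integers(corpora[names[c]]))) for c in picks]


if __name__ == "__main__":
    index = build_sample_index(10)
    print(index[:3])
```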

🚀 Accelerating Dataset Processing: Results

Figure 9: Time spent building BlendableDataset
Figure 10: Time spent building GPTDataset

📓 References

โค๏ธ Thank you!

  • Organizers

  • Feel free to reach out!

๐Ÿ™ Acknowledgements

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

📑 Bibliography

Song, Shuaiwen Leon, Bonnie Kruft, Minjia Zhang, Conglong Li, Shiyang Chen, Chengming Zhang, Masahiro Tanaka, et al. 2023. "DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery Through Sophisticated AI System Technologies." https://arxiv.org/abs/2310.04610.
Wei, Jason, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, et al. 2022. "Emergent Abilities of Large Language Models." https://arxiv.org/abs/2206.07682.
Yang, Jingfeng, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. 2023. "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond." https://arxiv.org/abs/2304.13712.

๐ŸŽ Extras

🚂 Loooooooooong Sequence Lengths

Figure 11: Maximum (achievable) SEQ_LEN for both 25B and 33B models (see Song et al. (2023))

โ™ป๏ธ Life Cycle of the LLM

Figure 12: Pre-training: virtually all of the compute is used during the pre-training phase
Figure 13: Fine-tuning: fine-tuning actually updates the model's weights to make the model better at a certain task

๐ŸŽ Training LLMs

Figure 14: It's hungry!
Figure 15: Visualization from Yang et al. (2023)

Footnotes

  1. Lead

  2. Co-led by: Venkat Vishwanath, Sam Foreman

Citation

BibTeX citation:
@unpublished{foreman2024,
  author = {Foreman, Sam},
  title = {AuroraGPT},
  date = {2024-10-30},
  url = {https://samforeman.me/talks/AuroraGPT/alcf-hpc-workshop-2024/slides},
  langid = {en}
}
For attribution, please cite this work as:
Foreman, Sam. 2024. "AuroraGPT." October 30. https://samforeman.me/talks/AuroraGPT/alcf-hpc-workshop-2024/slides.