Foundation Models for Science
@ Foundation Models for the Electric Grid
Sam Foreman
2025-02-12
AuroraGPT: General-purpose scientific LLM
Broadly trained on general corpora plus scientific {papers, texts, data}
Awesome-LLM
Datasets and data pipelines for preparing science training data
Software infrastructure and workflows to train, evaluate, and deploy LLMs at scale for scientific research purposes
| Racks | 166    |
| Nodes | 10,624 |
| CPUs  | 21,248 |
| GPUs  | 63,744 |
| NICs  | 84,992 |
| HBM   | 8 PB   |
| DDR5  | 10 PB  |
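These totals line up with Aurora's per-node configuration (2 CPUs, 6 GPUs, and 8 NICs per node; 64 nodes per rack). The snippet below is only a quick sanity check of the table, not part of the original slides:

```python
# Cross-check the Aurora totals in the table above against the
# per-node configuration (2 CPUs, 6 GPUs, 8 NICs; 64 nodes/rack).
racks, nodes = 166, 10_624
assert racks * 64 == nodes     # 64 nodes per rack
assert nodes * 2 == 21_248     # CPUs
assert nodes * 6 == 63_744     # GPUs
assert nodes * 8 == 84_992     # NICs
```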
Up to ≈25× throughput improvement for genomic FMs, with a 6.5× gain in energy efficiency
✅ Goal: Assemble a large corpus of documents (general and scientific) to train and fine-tune AuroraGPT models
The original implementation was slow.
✅ Goals
❌ Challenges
Megatron-DeepSpeed
ezpz
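For context, ezpz wraps the boilerplate of setting up distributed PyTorch on ALCF systems in a single call. A minimal sketch of that usage follows; the API names are taken from the saforem2/ezpz README, and the exact signatures should be treated as an assumption:

```python
# Minimal sketch of ezpz-style distributed setup (API names per the
# saforem2/ezpz README; exact signatures are an assumption here).
import torch
import ezpz

# One call wires up torch.distributed from the scheduler environment
# (e.g., PBS on Aurora) and returns this process's global rank.
rank = ezpz.setup_torch(backend="DDP")
device = ezpz.get_torch_device()  # "cuda" / "xpu" / "cpu" as available

model = torch.nn.Linear(128, 128).to(device)
if torch.distributed.is_initialized() and torch.distributed.get_world_size() > 1:
    model = torch.nn.parallel.DistributedDataParallel(model)
print(f"rank={rank} device={device}")
```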
🙏 Acknowledgements
This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
Figure: scaling with `SEQ_LEN` for both 25B and 33B models (see Song et al. (2023))
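Song et al. (2023) describes DeepSpeed-Ulysses, which makes long-`SEQ_LEN` training feasible by sharding the sequence across ranks, then using an all-to-all to re-partition along the attention-head dimension so attention runs locally. Below is a single-process toy sketch of that re-partitioning; the sizes and names are illustrative, not from the paper:

```python
import torch

# Single-process toy of the Ulysses-style all-to-all re-partition
# (Song et al., 2023). Sizes and names are illustrative only; real
# training exchanges shards via torch.distributed all_to_all on GPUs.
world_size = 4                        # sequence-parallel ranks (toy)
seq_len, n_heads, d_head = 32, 8, 16
x = torch.randn(seq_len, n_heads, d_head)

# 1) Each rank starts with a contiguous slice of the sequence.
seq_shards = x.chunk(world_size, dim=0)   # world_size x (seq/ws, H, d)

# 2) The all-to-all swaps the partition axis: afterwards each rank
#    holds the FULL sequence but only n_heads / world_size heads, so
#    attention (which needs every position for a head) is local.
head_shards = [
    torch.cat([s.chunk(world_size, dim=1)[r] for s in seq_shards], dim=0)
    for r in range(world_size)
]
for shard in head_shards:
    assert shard.shape == (seq_len, n_heads // world_size, d_head)
```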