Draft

🍋 ezpz

Blog post about the ezpz package
Author
Affiliation
Published

January 10, 2026

Modified

January 14, 2026

In ancient times, back in ~2022–2023, virtually all (production) PyTorch code was designed to run on NVIDIA GPUs.

[Figure: AMD and Intel PyTorch Enablement Timeline, 2012–2026 — Torch7 era and early CUDA-to-HIP ports, ROCm 1.0 and HIPIFY tooling, official PyTorch ROCm packages, Intel Extension for PyTorch launch and end of life, native Intel GPU support]

[Figure: PyTorch Vendor Integration Timeline (AMD vs. Intel), 2022–2026, annotated with PyTorch releases 1.8–2.10]

AMD Timeline

  • Pre-2021: Early Efforts and Torch7
    • 2012: Torch7, a Lua-based precursor to PyTorch with C/CUDA backends, was released.
    • ROCm 1.0: AMD demonstrated the ability to port CUDA code to HIP (AMD’s C++ dialect for GPU computing) using the HIPIFY tool, including ports of Caffe and Torch7.
  • 2021-2022: Official Support and Foundation
    • March 2021: PyTorch for the AMD ROCm platform became officially available as a Python package, simplifying installation on supported Linux systems.
    • September 2022: The PyTorch project joined the independent Linux Foundation, with AMD participating as a founding member of the PyTorch Foundation governing board.
  • 2023: PyTorch 2.0 Integration
    • April 2023: AMD announced day-zero support for PyTorch 2.0 within the ROCm 6.0 ecosystem, leveraging new features like TorchDynamo for performance improvements.
    • OpenAI Triton Support: The ecosystem grew to include support for OpenAI Triton, a key component for high-performance AI workloads.
  • 2024-2025: Expanding Accessibility (Windows & Consumer GPUs)
    • June 2024: AMD released guides and information on running PyTorch models on AMD MI300X systems, highlighting near drop-in compatibility with code written for NVIDIA GPUs.
    • October 2024: AMD released a “how-to” guide for using torchtune, a PyTorch library for fine-tuning LLMs, on AMD GPUs.
    • September 2025: AMD released a public preview of PyTorch on Windows, enabling native AI inference on select consumer Radeon RX 7000 and 9000 series GPUs and Ryzen AI APUs, without needing workarounds like WSL2.
    • November 2025: Release of AMD Software: PyTorch on Windows Edition 7.1.1, featuring an update to AMD ROCm 7.1.1.
  • Future/Upcoming
    • 2026: AMD is working on its next-generation MI450X rack-scale solution, which aims to be competitive with NVIDIA’s high-end offerings by the second half of 2026.
    • Post-2026: The company has also detailed plans for future MI500 series data center GPUs, targeting a significant increase in AI performance.
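The CUDA-to-HIP porting path that HIPIFY enabled (mentioned under the pre-2021 era above) can be sketched as a mechanical source translation. The `hipify` helper and `CUDA_TO_HIP` table below are illustrative stand-ins, not the real hipify-perl/hipify-clang tools, which cover the full CUDA API surface and do proper parsing rather than string replacement:

```python
# Toy sketch of HIPIFY-style translation: mechanically rewrite CUDA API
# calls in C++ source to their HIP equivalents. Only a handful of
# illustrative mappings are included here.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}


def hipify(source: str) -> str:
    """Return `source` with known CUDA identifiers swapped for HIP ones."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source
```

This is why ROCm could bootstrap ports of Caffe and Torch7 quickly: most CUDA runtime calls have one-to-one HIP counterparts, so translation is largely mechanical.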

Intel Timeline

  • 2018: Intel begins contributing to the open-source PyTorch framework.
  • 2020: The Intel® Extension for PyTorch* (IPEX) is launched as a separate package to provide optimized performance on Intel CPUs and GPUs.
  • October 2022: PyTorch 1.13 is released with integrated support for Intel® VTune™ Profiler’s ITT APIs.
  • August 2023: Intel joins the PyTorch Foundation as a Premier member, deepening its commitment to the ecosystem.
  • July 2024: PyTorch 2.4 debuts with initial (prototype) native support for Intel GPUs (client and data center).
  • April 2025: PyTorch 2.7 establishes a solid foundation for Intel GPU support in both eager and graph modes (torch.compile) on Windows and Linux.
  • August 2025: Active development of the separate Intel® Extension for PyTorch* ceases following the PyTorch 2.8 release, as most features are now upstreamed into the main PyTorch project.
  • End of March 2026 (Planned): The Intel® Extension for PyTorch* project will officially reach end-of-life. Users are strongly recommended to use native PyTorch directly.
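With IPEX retired, the migration story above reduces to using PyTorch's built-in `"xpu"` device directly. The helper below is a hypothetical sketch of the usual backend-selection pattern; the boolean flags stand in for `torch.cuda.is_available()` and `torch.xpu.is_available()` so the example runs on any machine:

```python
def select_device(cuda_available: bool, xpu_available: bool) -> str:
    """Pick a torch device string, preferring any available accelerator.

    The flags are hypothetical stand-ins for torch.cuda.is_available()
    and torch.xpu.is_available(). Note that on a ROCm build of PyTorch
    the CUDA check also covers AMD GPUs, since HIP reuses the
    torch.cuda namespace.
    """
    if cuda_available:
        return "cuda"  # NVIDIA GPUs (or AMD GPUs on ROCm builds)
    if xpu_available:
        return "xpu"   # Intel GPUs, native in PyTorch since 2.5
    return "cpu"       # portable fallback
```

A model then moves to the chosen backend with `model.to(select_device(...))`; on PyTorch 2.8+ no `import intel_extension_for_pytorch` is needed.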

This made sense at the time: NVIDIA held the vast majority of the GPU market and was, for practical purposes, the only major vendor of GPUs for deep learning.

This was before the advent of

we were still in the early days of trying to run PyTorch on

I’ve been working on the 🍋 ezpz package for a while now,

Footnotes

  1. Even now, in 2026, a lot of code is still NVIDIA-centric and is rarely designed with multi-platform support in mind.↩︎

  2. PyTorch 1.13 release↩︎

  3. Intel Joins the PyTorch Foundation↩︎

Citation

BibTeX citation:
@online{foreman2026,
  author = {Foreman, Sam},
  title = {🍋 \texttt{ezpz}},
  date = {2026-01-10},
  url = {https://samforeman.me/posts/2026/01/10/},
  langid = {en}
}
For attribution, please cite this work as:
Foreman, Sam. 2026. “🍋 ezpz.” January 10, 2026. https://samforeman.me/posts/2026/01/10/.