April 23, 2025
PyTorch 2.7 Release
We are excited to announce the release of PyTorch® 2.7 (release notes)! This release features a broad set of improvements; see the release notes for the full list.
April 08, 2025
Accelerating Whisper on Arm with PyTorch and Hugging Face Transformers
Automatic speech recognition (ASR) has revolutionized how we interact with technology, paving the way for applications like real-time audio transcription, voice assistants, and accessibility tools. OpenAI Whisper is a powerful model for ASR, capable of multilingual speech recognition and translation.
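As a rough sketch of what running Whisper through Hugging Face Transformers looks like in practice (the checkpoint name and audio path below are illustrative, not taken from the post):

```python
# Minimal sketch: Whisper inference via the Transformers ASR pipeline.
# "openai/whisper-small" and "sample.wav" are illustrative placeholders.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-small",   # any Whisper checkpoint works here
    torch_dtype=torch.float32,
    device="cpu",                   # e.g. an Arm-based CPU
)

result = asr("sample.wav")          # path to a local audio file
print(result["text"])
```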
April 03, 2025
PyTorch Day France 2025: Call For Proposals Open
We’re pleased to announce PyTorch Day France 2025, a dedicated gathering of the PyTorch community held 7 May 2025 in Paris, France. Proudly hosted by the PyTorch Foundation and co-located with GOSIM AI Paris 2025, this event will bring together developers, researchers, and practitioners driving innovation in open source AI and machine learning.
March 19, 2025
PyTorch Day China 2025 Call for Proposals Open
We’re excited to announce the first-ever PyTorch Day China! This new event, hosted by the PyTorch Foundation, will take place on June 7 in Beijing, China, bringing together AI practitioners, researchers, and industry professionals to explore the latest advancements in open source AI and machine learning. Co-located with the BAAI Conference, PyTorch Day China is a chance to connect with the community, share knowledge, and help shape the future of deep learning.
March 13, 2025
Introducing the New PyTorch Landscape: Your Guide to the PyTorch Ecosystem
We’re excited to reveal the brand-new PyTorch Landscape, which helps researchers, developers, and organizations easily locate useful, curated, community-built tools that augment the PyTorch core framework.
March 11, 2025
Scaling Recommendation Systems Training to Thousands of GPUs with 2D Sparse Parallelism
At Meta, recommendation systems are the cornerstone of delivering relevant and personalized ads to billions of users globally. Through technologies like PyTorch’s TorchRec, we’ve successfully developed solutions that enable model training across hundreds of GPUs. While these systems have served us well, recent research on scaling laws has revealed a compelling opportunity: we can achieve significantly better model performance by training dramatically larger neural networks.
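For readers unfamiliar with TorchRec, here is a minimal sketch of the primitives the post scales up; the table size and names are illustrative, and 2D sparse parallelism itself is a sharding strategy layered on top of these building blocks, not shown here:

```python
# Minimal TorchRec sketch: declaring a sparse embedding table collection.
# Sizes and names are illustrative, not from the post.
import torch
import torchrec

tables = [
    torchrec.EmbeddingBagConfig(
        name="user_table",
        embedding_dim=64,
        num_embeddings=1_000_000,   # illustrative vocabulary size
        feature_names=["user_id"],
    ),
]
ebc = torchrec.EmbeddingBagCollection(tables=tables, device=torch.device("meta"))

# In a distributed job, torchrec.distributed.DistributedModelParallel
# shards these tables across ranks once a process group is initialized:
#   model = torchrec.distributed.DistributedModelParallel(module=ebc)
```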
March 06, 2025
Peak Performance, Minimized Memory: Optimizing torchtune’s performance with torch.compile & Liger Kernel
LinkedIn: Shivam Sahni, Byron Hsu, Yanning Chen; Meta: Ankith Gunapal, Evan Smothers
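As a rough illustration of the torch.compile usage pattern the post benchmarks (the toy model below stands in for a torchtune fine-tuning recipe):

```python
# Minimal torch.compile sketch; the tiny MLP is a stand-in for an LLM.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.GELU(), nn.Linear(256, 128))
compiled_model = torch.compile(model)   # compiles on first forward pass

x = torch.randn(4, 128)
out = compiled_model(x)                 # later calls reuse the compiled graph
print(out.shape)
```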