@PyTorch
PyTorch 2.11 features improvements for distributed training and hardware-specific operator support. Join Andrey Talman and Nikita Shulga on Tuesday, March 31st at 10 am for a live update and Q&A.

Topics:
- Differentiable Collectives for Distributed Training
- FlexAttention: now includes a FlashAttention-4 backend on Hopper and Blackwell GPUs
- MPS (Apple Silicon): comprehensive operator expansion
- RNN/LSTM GPU Export Support
- XPU Graph

Register: https://t.co/eJUPu4m4K5

#PyTorch #OpenSource #AIInfrastructure