🐦 Twitter Post Details

@PyTorch

We’re excited to share Generalized Dot-Product Attention (GDPA) — a production-driven attention kernel designed specifically for large-scale recommendation systems (RecSys).

Proposed in our recent paper, GDPA replaces softmax with a flexible activation tailored for real-world RecSys traffic patterns and has been deployed in Meta’s largest recommendation model, GEM.

🔗 Read our latest blog: https://t.co/YxePbndHlP

By redesigning attention around production characteristics rather than benchmark assumptions, GDPA achieves 2× forward speedup (1,145 BF16 TFLOPs, ~97% tensor core utilization), 1.6× backward speedup, and up to 3.5× forward speedup vs. FA4 under short K/V settings on NVIDIA B200.

This work demonstrates how real production traffic can fundamentally reshape kernel design.

✍ Jiaqi Xu, Han Xu, Junqing Zhou, Devashish Shankar, Xiaoyi (Leo) Liu, Shuqi Yang

#PyTorch #OpenSourceAI #GDPA #GEM
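
The post itself contains no code, but the core idea it describes (replacing the row-wise softmax in attention with a flexible pointwise activation) is easy to sketch. Below is a minimal, illustrative PyTorch version; the choice of SiLU as the activation is an assumption for the sketch, and the actual activation and kernel-level design are described in the linked paper and blog.

import torch
import torch.nn.functional as F

def gdpa_sketch(q, k, v, activation=F.silu):
    """Generalized dot-product attention: activation(Q K^T * scale) @ V.

    Unlike standard scaled dot-product attention, there is no row-wise
    softmax; the activation is applied pointwise to the score matrix.
    SiLU is an assumed stand-in for the paper's "flexible activation".
    """
    scale = q.shape[-1] ** -0.5
    # Scores have shape (batch, heads, q_len, kv_len)
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    weights = activation(scores)
    return torch.matmul(weights, v)

# Toy shapes echoing the "short K/V" regime the post benchmarks.
# (Production runs in BF16 per the post; float32 keeps the demo portable.)
q = torch.randn(2, 8, 256, 64)
k = torch.randn(2, 8, 32, 64)
v = torch.randn(2, 8, 32, 64)
print(gdpa_sketch(q, k, v).shape)  # torch.Size([2, 8, 256, 64])

Note that dropping softmax also drops its row-wise normalization (a reduction over the K/V axis), which is the kind of structural change that can alter how a fused kernel is scheduled, consistent with the post's framing of redesigning attention around production traffic rather than benchmark assumptions.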

Media 1

📊 Media Metadata

{
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2034311193444094207/media_0.jpg",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-03-18T16:53:17.936965",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2034311193444094207",
  "url": "https://x.com/PyTorch/status/2034311193444094207",
  "twitterUrl": "https://twitter.com/PyTorch/status/2034311193444094207",
  "text": "We’re excited to share Generalized Dot-Product Attention (GDPA) — a production-driven attention kernel designed specifically for large-scale recommendation systems (RecSys). \n\nProposed in our recent paper, GDPA replaces softmax with a flexible activation tailored for real-world RecSys traffic patterns and has been deployed in Meta’s largest recommendation model, GEM. \n\n🔗 Read our latest blog: https://t.co/YxePbndHlP\n\nBy redesigning attention around production characteristics rather than benchmark assumptions, GDPA achieves 2× forward speedup (1,145 BF16 TFLOPs, ~97% tensor core utilization), 1.6× backward speedup, and up to 3.5× forward speedup vs. FA4 under short K/V settings on NVIDIA B200. \n\nThis work demonstrates how real production traffic can fundamentally reshape kernel design.\n\n✍ Jiaqi Xu, Han Xu, Junqing Zhou, Devashish Shankar, Xiaoyi (Leo) Liu, Shuqi Yang\n\n#PyTorch #OpenSourceAI #GDPA #GEM",
  "source": "Twitter for iPhone",
  "retweetCount": 0,
  "replyCount": 0,
  "likeCount": 1,
  "quoteCount": 0,
  "viewCount": 238,
  "createdAt": "Wed Mar 18 16:49:18 +0000 2026",
  "lang": "en",
  "bookmarkCount": 0,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2034311193444094207",
  "displayTextRange": [
    0,
    278
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "PyTorch",
    "url": "https://x.com/PyTorch",
    "twitterUrl": "https://twitter.com/PyTorch",
    "id": "776585502606721024",
    "name": "PyTorch",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1813965160702451712/yXV1vRhr_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/776585502606721024/1761575044",
    "description": "",
    "location": "",
    "followers": 480507,
    "following": 82,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Fri Sep 16 00:56:26 +0000 2016",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 848,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 1349,
    "statusesCount": 3097,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {
      "description": "Tensors and neural networks in Python with strong hardware acceleration. PyTorch is an open source project at the Linux Foundation. #PyTorchFoundation",
      "entities": {
        "description": {
          "hashtags": [
            {
              "indices": [
                132,
                150
              ],
              "text": "PyTorchFoundation"
            }
          ],
          "symbols": [],
          "urls": [],
          "user_mentions": []
        },
        "url": {
          "urls": [
            {
              "display_url": "pytorch.org",
              "expanded_url": "http://pytorch.org",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/6SwTBhUwTJ"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/fIxYEpHqY0",
        "expanded_url": "https://twitter.com/PyTorch/status/2034311193444094207/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "id_str": "2034311172149633024",
        "indices": [
          279,
          302
        ],
        "media_key": "3_2034311172149633024",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARw7UzualmAACgACHDtTQI/WEP8AAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABHDtTO5qWYAAKAAIcO1NAj9YQ/wAA",
            "media_key": "3_2034311172149633024"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/HDtTO5qWYAAtyRP.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 1075,
              "w": 1920,
              "x": 0,
              "y": 0
            },
            {
              "h": 1080,
              "w": 1080,
              "x": 0,
              "y": 0
            },
            {
              "h": 1080,
              "w": 947,
              "x": 0,
              "y": 0
            },
            {
              "h": 1080,
              "w": 540,
              "x": 66,
              "y": 0
            },
            {
              "h": 1080,
              "w": 1920,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1080,
          "width": 1920
        },
        "sizes": {
          "large": {
            "h": 1080,
            "w": 1920
          }
        },
        "type": "photo",
        "url": "https://t.co/fIxYEpHqY0"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [
      {
        "indices": [
          880,
          888
        ],
        "text": "PyTorch"
      },
      {
        "indices": [
          889,
          902
        ],
        "text": "OpenSourceAI"
      },
      {
        "indices": [
          903,
          908
        ],
        "text": "GDPA"
      },
      {
        "indices": [
          909,
          913
        ],
        "text": "GEM"
      }
    ],
    "symbols": [],
    "urls": [
      {
        "display_url": "pytorch.org/blog/generaliz…",
        "expanded_url": "https://pytorch.org/blog/generalized-dot-product-attention-tackling-real-world-challenges-in-gpu-training-kernels/",
        "indices": [
          396,
          419
        ],
        "url": "https://t.co/YxePbndHlP"
      }
    ],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}