🐦 Twitter Post Details

@_philschmid

Transformers Just Got Faster!⚡️ 🚀 Thrilled to announce native Flash Attention (FA) 2 support in Hugging Face Transformers to speed up training and inference for transformer models like LLaMA and Falcon up to 2x. 🦙🦅 👉  https://t.co/MXy0iIykD7 🧶 https://t.co/PCFS8A57ig
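The linked docs describe opting into Flash Attention 2 when loading a model. As a hedged sketch only: around this announcement (Transformers 4.34, September 2023) the opt-in was the `use_flash_attention_2=True` flag on `from_pretrained` (later versions moved to `attn_implementation="flash_attention_2"`); FA2 also needs an Ampere-or-newer GPU, the `flash-attn` package, and fp16/bf16 weights. The helper below only assembles the loading kwargs so the actual model download and GPU requirements stay out of the sketch.

```python
def fa2_load_kwargs(use_fa2: bool) -> dict:
    """Hypothetical helper: build `from_pretrained` kwargs for opting
    into Flash Attention 2, per the release announced in this tweet.

    Assumptions (not confirmed by the tweet itself): the flag name
    `use_flash_attention_2` from Transformers ~4.34, and that FA2
    requires half-precision weights.
    """
    kwargs = {"torch_dtype": "float16"}  # FA2 kernels need fp16/bf16
    if use_fa2:
        kwargs["use_flash_attention_2"] = True
    return kwargs


# Usage (commented out: needs a CUDA GPU, the flash-attn package,
# and access to the model weights):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "tiiuae/falcon-7b", **fa2_load_kwargs(True)
# )
```

The speedup claim in the tweet (up to 2x for models like LLaMA and Falcon) comes from fusing the attention computation; nothing else in the loading call changes.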

📊 Media Metadata

{
  "media": [
    {
      "type": "photo",
      "url": "https://pbs.twimg.com/media/F64ZZgVWIAAqp4y.jpg",
      "media_url": "https://pbs.twimg.com/media/F64ZZgVWIAAqp4y.jpg",
      "filename": "media_0.jpg"
    }
  ],
  "conversion_date": "2025-08-13T00:34:39.666068",
  "format_converted": true,
  "original_structure": "had_media_only",
  "enhanced_from_raw_response": true,
  "enhanced_at": "2025-08-13T17:20:00Z"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2019-06-18T18:39:49.000Z",
    "default_profile_image": false,
    "description": "Tech Lead at @huggingface 👨🏻‍💻 🤗  and AWS ML Hero 🦸🏻\n| Cloud & ML enthusiast | 📍Nuremberg  |  🇩🇪 https://t.co/l1ppq3q3hk",
    "fast_followers_count": 0,
    "favourites_count": 3276,
    "followers_count": 8026,
    "friends_count": 427,
    "has_custom_timelines": false,
    "is_translator": false,
    "listed_count": 225,
    "location": "Nürnberg",
    "media_count": 203,
    "name": "Philipp Schmid",
    "normal_followers_count": 8026,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/1141052916570214400/1582380032",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1663943981728686083/C0_GQDvW_normal.jpg",
    "screen_name": "_philschmid",
    "statuses_count": 930,
    "translator_type": "none",
    "url": "https://t.co/8BDXIK6omb",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "1141052916570214400"
  },
  "id": "1706329562794066200",
  "conversation_id": "1706329562794066200",
  "full_text": "Transformers Just Got Faster!⚡️ 🚀 Thrilled to announce native Flash Attention (FA) 2 support in Hugging Face Transformers to speed up training and inference for transformer models like LLaMA and Falcon up to 2x. 🦙🦅\n\n👉  https://t.co/MXy0iIykD7\n\n🧶 https://t.co/PCFS8A57ig",
  "reply_count": 2,
  "retweet_count": 54,
  "favorite_count": 214,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/MXy0iIykD7",
      "expanded_url": "https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flash-attention-2",
      "display_url": "huggingface.co/docs/transform…"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F64ZZgVWIAAqp4y.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/_philschmid/status/1706329562794066200",
  "created_at": "2023-09-25T15:27:24.000Z",
  "#sort_index": "1706329562794066200",
  "view_count": 27793,
  "quote_count": 2,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": false,
  "startUrl": "https://twitter.com/_philschmid/status/1706329562794066200"
}