🐦 Twitter Post Details

@rasbt

Flagship open-weight release days are always exciting. Was just reading through the Gemma 4 reports, configs, and code, and here are my takeaways:

Architecture-wise, besides multimodal support, Gemma 4 (31B) looks pretty much unchanged compared to Gemma 3 (27B).

Gemma 4 keeps its fairly unique Pre- and Post-norm setup and otherwise remains classic, with a 5:1 hybrid attention scheme: five sliding-window (local) layers for every full-attention (global) layer. The attention mechanism itself is classic Grouped Query Attention (GQA).

But let's not be fooled by the lack of architectural changes. Looking at the benchmarks, Gemma 4 is a huge leap from Gemma 3, which is likely due to the training set and recipe.

Interestingly, on the AI Arena Leaderboard, Gemma 4 (31B) ranks similarly to the much larger Qwen3.5-397B-A17B model. But as I discussed in my model evaluation article, arena scores are a bit problematic: they can be gamed and are biased toward human (style) preferences.

If we look at some other common benchmarks, which I plotted below, we can see that it's indeed a very clear leap over Gemma 3 and ranks on par with Qwen3.5 27B.

Note that there is also a Mixture-of-Experts (MoE) Gemma 4 variant that is slightly smaller (27B total, with 4 billion parameters active). Its benchmark scores are only slightly worse than those of Gemma 4 (31B).

I omitted the MoE architecture in the figure below because the figure is already very crowded, but you can find it in my LLM Architecture Gallery.

Anyways, overall, it's a nice, strong model release and a solid contender for local usage. Also, one aspect that should not be underrated is that (it seems) the model is now released under a standard Apache 2.0 open-source license, which has much friendlier usage terms than the custom Gemma 3 license.
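
For readers unfamiliar with the Pre- and Post-norm setup mentioned above: the idea is to normalize both the input to each sublayer and its output, inside the residual connection. Below is a minimal PyTorch sketch of that pattern; the use of RMSNorm and the toy dimensions are assumptions for illustration, not taken from the Gemma 4 code (nn.RMSNorm requires PyTorch >= 2.4).

import torch
import torch.nn as nn

class PrePostNormSublayer(nn.Module):
    """Wrap a sublayer (attention or MLP) with a norm both before
    and after it, inside the residual branch (a sketch, not Gemma code)."""

    def __init__(self, dim: int, sublayer: nn.Module):
        super().__init__()
        self.pre_norm = nn.RMSNorm(dim)   # normalizes the sublayer input
        self.post_norm = nn.RMSNorm(dim)  # normalizes the sublayer output
        self.sublayer = sublayer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # residual: x + post_norm(sublayer(pre_norm(x)))
        return x + self.post_norm(self.sublayer(self.pre_norm(x)))

block = PrePostNormSublayer(64, nn.Linear(64, 64))  # toy stand-in for an MLP
print(block(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])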
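
The 5:1 hybrid ratio can likewise be made concrete with a short sketch that assigns an attention type to each layer. The function name and the exact position of the global layer within each group of six are assumptions, not Gemma's actual configuration:

def attention_layer_types(num_layers: int, local_per_global: int = 5) -> list[str]:
    """Assign an attention type to each transformer layer:
    `local_per_global` sliding-window layers, then one full-attention layer."""
    return [
        "full_attention" if (i + 1) % (local_per_global + 1) == 0
        else "sliding_window"
        for i in range(num_layers)
    ]

print(attention_layer_types(12))
# ['sliding_window'] * 5 + ['full_attention'], repeated twice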
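
Finally, Grouped Query Attention boils down to groups of query heads sharing a single key/value head, which shrinks the KV cache relative to standard multi-head attention. A minimal PyTorch sketch with made-up head counts:

import torch

def grouped_query_attention(q: torch.Tensor, k: torch.Tensor,
                            v: torch.Tensor) -> torch.Tensor:
    """Minimal GQA: n_q_heads query heads share n_kv_heads key/value heads.
    q: (batch, n_q_heads, seq, head_dim)
    k, v: (batch, n_kv_heads, seq, head_dim), n_q_heads % n_kv_heads == 0
    """
    group_size = q.shape[1] // k.shape[1]  # query heads per KV head
    # Expand each KV head so it is shared by `group_size` query heads.
    k = k.repeat_interleave(group_size, dim=1)
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

# 8 query heads sharing 2 KV heads -> 4 query heads per group.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])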

Media 1

📊 Media Metadata

{
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2039780905619705902/media_0.jpg",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-04-02T19:05:42.813234",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2039780905619705902",
  "url": "https://x.com/rasbt/status/2039780905619705902",
  "twitterUrl": "https://twitter.com/rasbt/status/2039780905619705902",
  "text": "Flagship open-weight release days are always exciting. Was just reading through the Gemma 4 reports, configs, and code, and here are my takeaways:\n\nArchitecture-wise, besides multi-model support, Gemma 4 (31B) looks pretty much unchanged compared to Gemma 3 (27B).\n\nGemma 4 maintains a relatively unique Pre- and Post-norm setup and remains relatively classic, with a 5:1 hybrid attention mechanism combining a sliding-window (local) layer and a full-attention (global) layer. The attention mechanism itself is also classic Grouped Query Attention (GQA).\n\nBut let’s not be fooled by the lack of architectural changes. Looking at the benchmarks, Gemma 4 is a huge leap from Gemma 3. This is likely due to the training set and recipe.\n\nInterestingly, on the AI Arena Leaderboard, Gemma 4 (31B) ranks similarly to the much larger Qwen3.5-397B-A17B model. But as I discussed in my model evaluation article, arena scores are a bit problematic as they can be gamed and are biased towards human (style) preference.\n\nIf we look at some other common benchmarks, which I plotted below, we can see that it’s indeed a very clear leap over Gemma 3 and ranks on par with Qwen3.5 27B.\n\nNote that there is also a Mixture-of-Experts (MoE) Gemma 4 variant that is slightly smaller (27B  with 4 billion parameters active. The benchmarks are only slightly worse compared to Gemma 4 (31B).\n\nI omitted the MoE architecture in the figure below because the figure is already very crowded, but you can find it in my LLM Architecture Gallery.\n\nAnyways, overall, it's a nice and strong model release and a strong contender for local usage. Also, one aspect that should not be underrated is that (it seems) the model is now released with a standard Apache 2.0 open-source license, which has much friendlier usage terms than the custom Gemma 3 license.",
  "source": "Twitter for iPhone",
  "retweetCount": 0,
  "replyCount": 1,
  "likeCount": 2,
  "quoteCount": 0,
  "viewCount": 140,
  "createdAt": "Thu Apr 02 19:03:59 +0000 2026",
  "lang": "en",
  "bookmarkCount": 2,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2039780905619705902",
  "displayTextRange": [
    0,
    273
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "rasbt",
    "url": "https://x.com/rasbt",
    "twitterUrl": "https://twitter.com/rasbt",
    "id": "865622395",
    "name": "Sebastian Raschka",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1661187442043486209/a3E4t1eV_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/865622395/1742309979",
    "description": "",
    "location": "United States",
    "followers": 414454,
    "following": 1142,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Sun Oct 07 02:06:16 +0000 2012",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 24662,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 2094,
    "statusesCount": 19607,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1999847254367117736"
    ],
    "profile_bio": {
      "description": "ML/AI research engineer. Ex stats professor.\nAuthor of \"Build a Large Language Model From Scratch\" (https://t.co/O8LAAMRzzW) & reasoning (https://t.co/5TueQKx2Fk)",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [
            {
              "display_url": "amzn.to/4fqvn0D",
              "expanded_url": "https://amzn.to/4fqvn0D",
              "indices": [
                100,
                123
              ],
              "url": "https://t.co/O8LAAMRzzW"
            },
            {
              "display_url": "mng.bz/lZ5B",
              "expanded_url": "https://mng.bz/lZ5B",
              "indices": [
                138,
                161
              ],
              "url": "https://t.co/5TueQKx2Fk"
            }
          ],
          "user_mentions": []
        },
        "url": {
          "urls": [
            {
              "display_url": "sebastianraschka.com",
              "expanded_url": "https://sebastianraschka.com",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/HrtQQ5tgJl"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/sZ77pZho95",
        "expanded_url": "https://twitter.com/rasbt/status/2039780905619705902/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "h": 336,
                "w": 336,
                "x": 1656,
                "y": 298
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "h": 672,
                "w": 672,
                "x": 3312,
                "y": 596
              }
            ]
          }
        },
        "id_str": "2039780834064936963",
        "indices": [
          274,
          297
        ],
        "media_key": "3_2039780834064936963",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARxOwdyCFzADCgACHE7B7SsWUC4AAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABHE7B3IIXMAMKAAIcTsHtKxZQLgAA",
            "media_key": "3_2039780834064936963"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/HE7B3IIXMAMuz5w.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 2294,
              "w": 4096,
              "x": 0,
              "y": 1802
            },
            {
              "h": 4096,
              "w": 4096,
              "x": 0,
              "y": 0
            },
            {
              "h": 4096,
              "w": 3593,
              "x": 503,
              "y": 0
            },
            {
              "h": 4096,
              "w": 2048,
              "x": 1331,
              "y": 0
            },
            {
              "h": 4096,
              "w": 4096,
              "x": 0,
              "y": 0
            }
          ],
          "height": 4096,
          "width": 4096
        },
        "sizes": {
          "large": {
            "h": 2048,
            "w": 2048
          }
        },
        "type": "photo",
        "url": "https://t.co/sZ77pZho95"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}