🐦 Twitter Post Details

Viewing enriched Twitter post

@LiorOnAI

A 35 billion parameter model just beat a 235 billion parameter model.

That's not supposed to happen.

Qwen3.5-35B-A3B now outperforms its predecessor that had 6x more total parameters, and it does so while using 7x fewer active parameters per token.

The breakthrough isn't efficiency for efficiency's sake.

It's proof that three specific techniques can compress intelligence better than brute-force scaling:

1. Hybrid attention layers that mix linear attention (fast, scales to long contexts) with standard attention (accurate, catches nuance) in a 3:1 ratio

2. Ultra-sparse experts where only 3 billion of 35 billion parameters activate per token, but those 3 billion are chosen by a router trained on higher-quality data

3. Reinforcement learning scaled across millions of simulated agent environments, not just text prediction

The result is a model architecture where intelligence comes from better routing decisions, not bigger weight matrices.

This unlocks four things that weren't practical before:

1. Running frontier-class reasoning on a single GPU node instead of a cluster

2. Serving 1 million token contexts in production without exploding costs

3. Building agents that can handle complex tool use without the latency penalty of dense models

4. Fine-tuning on domain data without needing to update 200+ billion parameters

If this pattern holds, the next 18 months will belong to teams optimizing routing and data quality, not teams with the biggest GPU budgets.
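The 3:1 mix described in point 1 can be pictured as a repeating layer schedule. The sketch below is purely illustrative (function name and layer labels are made up, not Qwen's code): every fourth layer uses standard quadratic attention while the other three use linear attention, keeping most of the stack on the cheap path.

```python
def hybrid_layer_schedule(n_layers: int, linear_per_full: int = 3) -> list[str]:
    """Hypothetical sketch of a 3:1 hybrid attention stack: every
    (linear_per_full + 1)-th layer is standard ("full") attention,
    the rest are linear attention."""
    period = linear_per_full + 1
    return ["full" if (i + 1) % period == 0 else "linear" for i in range(n_layers)]

schedule = hybrid_layer_schedule(8)
# ['linear', 'linear', 'linear', 'full', 'linear', 'linear', 'linear', 'full']
```

With this layout, only a quarter of the layers pay the quadratic attention cost, which is what makes long contexts tractable while the periodic full-attention layers preserve exact token-to-token lookups.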

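The ultra-sparse expert selection in point 2 boils down to a top-k softmax router: per token, a small learned scorer picks which few experts run. The sketch below is a minimal, hypothetical illustration (the expert count and logits are invented), analogous to only ~3B of 35B parameters being active per token.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of router logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def route_top_k(router_logits, k=2):
    """Select the k experts with the highest router probability and
    renormalize so the active experts' weights sum to 1."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# 8 hypothetical experts scored for one token; only 2 activate.
logits = [0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3]
active = route_top_k(logits, k=2)  # experts 1 and 4 win
```

The point the post makes is that capacity lives in the router's decisions: improving the data the router is trained on changes *which* parameters fire, without growing the weight matrices themselves.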
📊 Media Metadata

{
  "score": 0.42,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-03-01T12:17:26.282772",
  "import_source": "api_import",
  "source_tagged_at": "2026-03-01T12:17:26.282783",
  "enriched": true,
  "enriched_at": "2026-03-01T12:17:26.282785"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2026427340306403836",
  "url": "https://x.com/LiorOnAI/status/2026427340306403836",
  "twitterUrl": "https://twitter.com/LiorOnAI/status/2026427340306403836",
  "text": "A 35 billion parameter model just beat a 235 billion parameter model. \n\nThat's not supposed to happen. \n\nQwen3.5-35B-A3B now outperforms its predecessor that had 6x more total parameters, and it does so while using 7x fewer active parameters per token.\n\nThe breakthrough isn't efficiency for efficiency's sake. \n\nIt's proof that three specific techniques can compress intelligence better than brute-force scaling:\n\n1. Hybrid attention layers that mix linear attention (fast, scales to long contexts) with standard attention (accurate, catches nuance) in a 3:1 ratio\n\n2. Ultra-sparse experts where only 3 billion of 35 billion parameters activate per token, but those 3 billion are chosen by a router trained on higher-quality data\n\n3. Reinforcement learning scaled across millions of simulated agent environments, not just text prediction\n\nThe result is a model architecture where intelligence comes from better routing decisions, not bigger weight matrices.\n\nThis unlocks four things that weren't practical before:\n\n1. Running frontier-class reasoning on a single GPU node instead of a cluster\n\n2. Serving 1 million token contexts in production without exploding costs\n\n3. Building agents that can handle complex tool use without the latency penalty of dense models\n\n4. Fine-tuning on domain data without needing to update 200+ billion parameters\n\nIf this pattern holds, the next 18 months will belong to teams optimizing routing and data quality, not teams with the biggest GPU budgets.",
  "source": "Twitter for iPhone",
  "retweetCount": 43,
  "replyCount": 23,
  "likeCount": 457,
  "quoteCount": 6,
  "viewCount": 49545,
  "createdAt": "Tue Feb 24 22:41:41 +0000 2026",
  "lang": "en",
  "bookmarkCount": 211,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2026427340306403836",
  "displayTextRange": [
    0,
    276
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "LiorOnAI",
    "url": "https://x.com/LiorOnAI",
    "twitterUrl": "https://twitter.com/LiorOnAI",
    "id": "931470139",
    "name": "Lior Alexander",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/2027106343283527680/lh729xEs_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/931470139/1761077189",
    "description": "",
    "location": "",
    "followers": 112932,
    "following": 2153,
    "status": "",
    "canDm": true,
    "canMediaTag": false,
    "createdAt": "Wed Nov 07 07:19:36 +0000 2012",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 6770,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 661,
    "statusesCount": 3756,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {
      "description": "Covering the latest news for AI devs • Founder @AlphaSignalAI (270k users) •  ML Eng since 2017 • Ex-Mila • MIT",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                47,
                61
              ],
              "name": "",
              "screen_name": "AlphaSignalAI"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "alphasignal.ai",
              "expanded_url": "https://alphasignal.ai",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/AyubevadmD"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": {
    "type": "tweet",
    "id": "2026339351530188939",
    "url": "https://x.com/Alibaba_Qwen/status/2026339351530188939",
    "twitterUrl": "https://twitter.com/Alibaba_Qwen/status/2026339351530188939",
    "text": "🚀 Introducing the Qwen 3.5 Medium Model Series\nQwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B\n\n✨ More intelligence, less compute.\n• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.\n\n• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios.\n\n• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:\n– 1M context length by default\n– Official built-in tools\n\n🔗 Hugging Face: https://t.co/wFMdX5pDjU\n\n🔗 ModelScope: https://t.co/9NGXcIdCWI\n\n🔗 Qwen3.5-Flash API: https://t.co/82ESSpaqAF\n\nTry in Qwen Chat 👇\nFlash: https://t.co/UkTL3JZxIK\n\n27B: https://t.co/haKxG4lETy\n\n35B-A3B: https://t.co/Oc1lYSTbwh\n\n122B-A10B: https://t.co/hBMODXmh1o\n\nWould love to hear what you build with it.",
    "source": "Twitter for iPhone",
    "retweetCount": 1105,
    "replyCount": 434,
    "likeCount": 7846,
    "quoteCount": 476,
    "viewCount": 3662957,
    "createdAt": "Tue Feb 24 16:52:03 +0000 2026",
    "lang": "en",
    "bookmarkCount": 5586,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "2026339351530188939",
    "displayTextRange": [
      0,
      277
    ],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {
      "type": "user",
      "userName": "Alibaba_Qwen",
      "url": "https://x.com/Alibaba_Qwen",
      "twitterUrl": "https://twitter.com/Alibaba_Qwen",
      "id": "1753339277386342400",
      "name": "Qwen",
      "isVerified": false,
      "isBlueVerified": true,
      "verifiedType": "Business",
      "profilePicture": "https://pbs.twimg.com/profile_images/1894073235379273728/0ROUmdkE_normal.jpg",
      "coverPicture": "https://pbs.twimg.com/profile_banners/1753339277386342400/1731637054",
      "description": "",
      "location": "",
      "followers": 154742,
      "following": 5,
      "status": "",
      "canDm": false,
      "canMediaTag": true,
      "createdAt": "Fri Feb 02 08:47:32 +0000 2024",
      "entities": {
        "description": {
          "urls": []
        },
        "url": {}
      },
      "fastFollowersCount": 0,
      "favouritesCount": 403,
      "hasCustomTimelines": true,
      "isTranslator": false,
      "mediaCount": 422,
      "statusesCount": 804,
      "withheldInCountries": [],
      "affiliatesHighlightedLabel": {},
      "possiblySensitive": false,
      "pinnedTweetIds": [
        "2026339351530188939"
      ],
      "profile_bio": {
        "description": "Open foundation models for AGI.",
        "entities": {
          "description": {
            "hashtags": [],
            "symbols": [],
            "urls": [],
            "user_mentions": []
          },
          "url": {
            "urls": [
              {
                "display_url": "qwen.ai",
                "expanded_url": "https://qwen.ai/",
                "indices": [
                  0,
                  23
                ],
                "url": "https://t.co/f8hrbNCQR4"
              }
            ]
          }
        }
      },
      "isAutomated": false,
      "automatedBy": null
    },
    "extendedEntities": {
      "media": [
        {
          "allow_download_status": {
            "allow_download": true
          },
          "display_url": "pic.twitter.com/ZWPibMn6at",
          "expanded_url": "https://twitter.com/Alibaba_Qwen/status/2026339351530188939/photo/1",
          "ext_media_availability": {
            "status": "Available"
          },
          "features": {
            "large": {
              "faces": []
            },
            "orig": {
              "faces": []
            }
          },
          "id_str": "2026339001133838336",
          "indices": [
            278,
            301
          ],
          "media_key": "3_2026339001133838336",
          "media_results": {
            "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARwfAJXlGjAACgACHB8A53paMIsAAA==",
            "result": {
              "__typename": "ApiMedia",
              "id": "QXBpTWVkaWE6DAABCgABHB8AleUaMAAKAAIcHwDnelowiwAA",
              "media_key": "3_2026339001133838336"
            }
          },
          "media_url_https": "https://pbs.twimg.com/media/HB8AleUaMAArNyM.jpg",
          "original_info": {
            "focus_rects": [
              {
                "h": 1328,
                "w": 2372,
                "x": 0,
                "y": 0
              },
              {
                "h": 1572,
                "w": 1572,
                "x": 800,
                "y": 0
              },
              {
                "h": 1572,
                "w": 1379,
                "x": 993,
                "y": 0
              },
              {
                "h": 1572,
                "w": 786,
                "x": 1563,
                "y": 0
              },
              {
                "h": 1572,
                "w": 2372,
                "x": 0,
                "y": 0
              }
            ],
            "height": 1572,
            "width": 2372
          },
          "sizes": {
            "large": {
              "h": 1357,
              "w": 2048
            }
          },
          "type": "photo",
          "url": "https://t.co/ZWPibMn6at"
        }
      ]
    },
    "card": null,
    "place": {},
    "entities": {
      "hashtags": [],
      "symbols": [],
      "urls": [
        {
          "display_url": "huggingface.co/collections/Qw…",
          "expanded_url": "https://huggingface.co/collections/Qwen/qwen35",
          "indices": [
            650,
            673
          ],
          "url": "https://t.co/wFMdX5pDjU"
        },
        {
          "display_url": "modelscope.cn/collections/Qw…",
          "expanded_url": "https://modelscope.cn/collections/Qwen/Qwen35",
          "indices": [
            689,
            712
          ],
          "url": "https://t.co/9NGXcIdCWI"
        },
        {
          "display_url": "modelstudio.console.alibabacloud.com/ap-southeast-1…",
          "expanded_url": "https://modelstudio.console.alibabacloud.com/ap-southeast-1/?tab=doc#/doc/?type=model&url=2840914_2&modelId=group-qwen3.5-flash",
          "indices": [
            735,
            758
          ],
          "url": "https://t.co/82ESSpaqAF"
        },
        {
          "display_url": "chat.qwen.ai/?models=qwen3.…",
          "expanded_url": "https://chat.qwen.ai/?models=qwen3.5-flash",
          "indices": [
            786,
            809
          ],
          "url": "https://t.co/UkTL3JZxIK"
        },
        {
          "display_url": "chat.qwen.ai/?models=qwen3.…",
          "expanded_url": "https://chat.qwen.ai/?models=qwen3.5-27b",
          "indices": [
            816,
            839
          ],
          "url": "https://t.co/haKxG4lETy"
        },
        {
          "display_url": "chat.qwen.ai/?models=qwen3.…",
          "expanded_url": "https://chat.qwen.ai/?models=qwen3.5-35b-a3b",
          "indices": [
            850,
            873
          ],
          "url": "https://t.co/Oc1lYSTbwh"
        },
        {
          "display_url": "chat.qwen.ai/?models=qwen3.…",
          "expanded_url": "https://chat.qwen.ai/?models=qwen3.5-122b-a10b",
          "indices": [
            886,
            909
          ],
          "url": "https://t.co/hBMODXmh1o"
        }
      ],
      "user_mentions": []
    },
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "isLimitedReply": false,
    "article": null
  },
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}