🐦 Twitter Post Details

Viewing enriched Twitter post

@LiorOnAI

Alibaba shipped four Qwen 3.5 small models with a trick borrowed from their 397B model: Gated DeltaNet hybrid attention.

Three layers of linear attention for every one layer of full attention.

The linear layers handle routine computation with constant memory use. The full attention layers fire only when precision matters.

This 3:1 ratio keeps memory flat while quality stays high, which is why even the 0.8B model supports a 262,000-token context window.

Every model handles text, images, and video natively.

No adapter bolted on afterward. The vision encoder uses 3D convolutions to capture motion in video, then merges features from multiple layers instead of just the final one.

The 9B beats GPT-5-Nano by 13 points on multimodal understanding, 17 points on visual math, and 30 points on document parsing. The 0.8B runs on a phone and processes video. The 4B fits in 8GB of VRAM and acts as a multimodal agent. All four are Apache 2.0.

If this architecture holds, the small model space just became a capability race instead of a size race.

A year ago, running a multimodal model locally meant a 13B+ model and a serious GPU.

Now a 4B model with 262K context handles text, images, and video from consumer hardware.

The gap between edge models and flagship models is closing faster than the gap between flagships and humans.
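The 3:1 interleaving described above can be sketched as a simple layer schedule. This is a minimal illustration, assuming the pattern starts with linear-attention layers and places a full-attention layer every fourth slot; the actual ordering inside Qwen 3.5 may differ and none of these names come from Qwen's code.

```python
# Hypothetical sketch of the 3:1 hybrid attention schedule from the post:
# three linear-attention layers for every full-attention layer.
# Linear attention keeps a fixed-size recurrent state (constant memory),
# while full attention's KV cache grows with sequence length -- so the
# fewer full layers there are, the flatter total memory stays.

def layer_kind(layer_index: int, ratio: int = 3) -> str:
    """Return 'full' for every (ratio+1)-th layer, 'linear' otherwise."""
    return "full" if (layer_index + 1) % (ratio + 1) == 0 else "linear"

def build_schedule(num_layers: int, ratio: int = 3) -> list[str]:
    """Build the per-layer attention-type schedule for a model."""
    return [layer_kind(i, ratio) for i in range(num_layers)]

schedule = build_schedule(8)
print(schedule)
# -> ['linear', 'linear', 'linear', 'full', 'linear', 'linear', 'linear', 'full']
```

With this schedule, only a quarter of the layers accumulate a growing KV cache, which is consistent with the post's claim that even the 0.8B model can afford a 262K-token context window.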

📊 Media Metadata

{
  "score": 0.42,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-03-06T07:09:32.042933",
  "import_source": "api_import",
  "source_tagged_at": "2026-03-06T07:09:32.042947",
  "enriched": true,
  "enriched_at": "2026-03-06T07:09:32.042949"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2028558859783311382",
  "url": "https://x.com/LiorOnAI/status/2028558859783311382",
  "twitterUrl": "https://twitter.com/LiorOnAI/status/2028558859783311382",
  "text": "Alibaba shipped four Qwen 3.5 small models with a trick borrowed from their 397B model: Gated DeltaNet hybrid attention. \n\nThree layers of linear attention for every one layer of full attention. \n\nThe linear layers handle routine computation with constant memory use. The full attention layers fire only when precision matters. \n\nThis 3:1 ratio keeps memory flat while quality stays high, which is why even the 0.8B model supports a 262,000-token context window.\n\nEvery model handles text, images, and video natively. \n\nNo adapter bolted on afterward. The vision encoder uses 3D convolutions to capture motion in video, then merges features from multiple layers instead of just the final one. \n\nThe 9B beats GPT-5-Nano by 13 points on multimodal understanding, 17 points on visual math, and 30 points on document parsing. The 0.8B runs on a phone and processes video. The 4B fits in 8GB of VRAM and acts as a multimodal agent. All four are Apache 2.0.\n\nIf this architecture holds, the small model space just became a capability race instead of a size race. \n\nA year ago, running a multimodal model locally meant a 13B+ model and a serious GPU. \n\nNow a 4B model with 262K context handles text, images, and video from consumer hardware. \n\nThe gap between edge models and flagship models is closing faster than the gap between flagships and humans.",
  "source": "Twitter for iPhone",
  "retweetCount": 10,
  "replyCount": 6,
  "likeCount": 52,
  "quoteCount": 0,
  "viewCount": 7853,
  "createdAt": "Mon Mar 02 19:51:35 +0000 2026",
  "lang": "en",
  "bookmarkCount": 27,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2028558859783311382",
  "displayTextRange": [
    0,
    276
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "LiorOnAI",
    "url": "https://x.com/LiorOnAI",
    "twitterUrl": "https://twitter.com/LiorOnAI",
    "id": "931470139",
    "name": "Lior Alexander",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/2027106343283527680/lh729xEs_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/931470139/1761077189",
    "description": "Covering the latest news for AI devs • Founder @AlphaSignalAI (270k users) •  ML Eng since 2017 • Ex-Mila • MIT",
    "location": "",
    "followers": 113157,
    "following": 2163,
    "status": "",
    "canDm": true,
    "canMediaTag": false,
    "createdAt": "Wed Nov 07 07:19:36 +0000 2012",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {
        "urls": [
          {
            "display_url": "alphasignal.ai",
            "expanded_url": "https://alphasignal.ai",
            "indices": [
              0,
              23
            ],
            "url": "https://t.co/AyubevadmD"
          }
        ]
      }
    },
    "fastFollowersCount": 0,
    "favouritesCount": 6800,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 661,
    "statusesCount": 3768,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {},
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": {
    "type": "tweet",
    "id": "2028460046510965160",
    "url": "https://x.com/Alibaba_Qwen/status/2028460046510965160",
    "twitterUrl": "https://twitter.com/Alibaba_Qwen/status/2028460046510965160",
    "text": "🚀 Introducing the Qwen 3.5 Small Model Series\nQwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B\n\n✨ More intelligence, less compute.\nThese small models are built on the same Qwen3.5 foundation — native multimodal, improved architecture, scaled RL:\n• 0.8B / 2B → tiny, fast, great for edge device\n• 4B → a surprisingly strong multimodal base for lightweight agents\n• 9B → compact, but already closing the gap with much larger models\nAnd yes — we’re also releasing the Base models as well.\nWe hope this better supports research, experimentation, and real-world industrial innovation.\nHugging Face: https://t.co/wFMdX5pDjU\nModelScope: https://t.co/9NGXcIdCWI",
    "source": "Twitter for iPhone",
    "retweetCount": 2919,
    "replyCount": 891,
    "likeCount": 21267,
    "quoteCount": 1340,
    "viewCount": 8662108,
    "createdAt": "Mon Mar 02 13:18:56 +0000 2026",
    "lang": "en",
    "bookmarkCount": 14952,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "2028460046510965160",
    "displayTextRange": [
      0,
      274
    ],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {
      "type": "user",
      "userName": "Alibaba_Qwen",
      "url": "https://x.com/Alibaba_Qwen",
      "twitterUrl": "https://twitter.com/Alibaba_Qwen",
      "id": "1753339277386342400",
      "name": "Qwen",
      "isVerified": false,
      "isBlueVerified": true,
      "verifiedType": "Business",
      "profilePicture": "https://pbs.twimg.com/profile_images/1894073235379273728/0ROUmdkE_normal.jpg",
      "coverPicture": "https://pbs.twimg.com/profile_banners/1753339277386342400/1731637054",
      "description": "Open foundation models for AGI.",
      "location": "",
      "followers": 178425,
      "following": 5,
      "status": "",
      "canDm": false,
      "canMediaTag": true,
      "createdAt": "Fri Feb 02 08:47:32 +0000 2024",
      "entities": {
        "description": {
          "urls": []
        },
        "url": {
          "urls": [
            {
              "display_url": "qwen.ai",
              "expanded_url": "https://qwen.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/f8hrbNCQR4"
            }
          ]
        }
      },
      "fastFollowersCount": 0,
      "favouritesCount": 404,
      "hasCustomTimelines": false,
      "isTranslator": false,
      "mediaCount": 425,
      "statusesCount": 811,
      "withheldInCountries": [],
      "affiliatesHighlightedLabel": {},
      "possiblySensitive": false,
      "pinnedTweetIds": [
        "2028460046510965160"
      ],
      "profile_bio": {},
      "isAutomated": false,
      "automatedBy": null
    },
    "extendedEntities": {
      "media": [
        {
          "allow_download_status": {
            "allow_download": true
          },
          "display_url": "pic.x.com/90JfOM9k4T",
          "expanded_url": "https://x.com/Alibaba_Qwen/status/2028460046510965160/photo/1",
          "ext_media_availability": {
            "status": "Available"
          },
          "features": {
            "large": {
              "faces": [
                {
                  "h": 106,
                  "w": 106,
                  "x": 1260,
                  "y": 7
                }
              ]
            },
            "medium": {
              "faces": [
                {
                  "h": 66,
                  "w": 66,
                  "x": 787,
                  "y": 4
                }
              ]
            },
            "orig": {
              "faces": [
                {
                  "h": 106,
                  "w": 106,
                  "x": 1260,
                  "y": 7
                }
              ]
            },
            "small": {
              "faces": [
                {
                  "h": 37,
                  "w": 37,
                  "x": 446,
                  "y": 2
                }
              ]
            }
          },
          "id_str": "2028459990722453504",
          "indices": [
            275,
            298
          ],
          "media_key": "3_2028459990722453504",
          "media_results": {
            "result": {
              "media_key": "3_2028459990722453504"
            }
          },
          "media_url_https": "https://pbs.twimg.com/media/HCaJnUQaoAAaMIc.jpg",
          "original_info": {
            "focus_rects": [
              {
                "h": 910,
                "w": 1625,
                "x": 289,
                "y": 0
              },
              {
                "h": 910,
                "w": 910,
                "x": 646,
                "y": 0
              },
              {
                "h": 910,
                "w": 798,
                "x": 702,
                "y": 0
              },
              {
                "h": 910,
                "w": 455,
                "x": 874,
                "y": 0
              },
              {
                "h": 910,
                "w": 1920,
                "x": 0,
                "y": 0
              }
            ],
            "height": 910,
            "width": 1920
          },
          "sizes": {
            "large": {
              "h": 910,
              "resize": "fit",
              "w": 1920
            },
            "medium": {
              "h": 569,
              "resize": "fit",
              "w": 1200
            },
            "small": {
              "h": 322,
              "resize": "fit",
              "w": 680
            },
            "thumb": {
              "h": 150,
              "resize": "crop",
              "w": 150
            }
          },
          "type": "photo",
          "url": "https://t.co/90JfOM9k4T"
        }
      ]
    },
    "card": null,
    "place": {},
    "entities": {
      "hashtags": [],
      "symbols": [],
      "urls": [
        {
          "display_url": "huggingface.co/collections/Qw…",
          "expanded_url": "https://huggingface.co/collections/Qwen/qwen35",
          "indices": [
            597,
            620
          ],
          "url": "https://t.co/wFMdX5pDjU"
        },
        {
          "display_url": "modelscope.cn/collections/Qw…",
          "expanded_url": "https://modelscope.cn/collections/Qwen/Qwen35",
          "indices": [
            633,
            656
          ],
          "url": "https://t.co/9NGXcIdCWI"
        }
      ],
      "user_mentions": []
    },
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "article": null
  },
  "retweeted_tweet": null,
  "article": null
}