🐦 Twitter Post Details


@omarsar0

Major release from DeepSeek. And a big deal for open-source LLMs.

DeepSeek-V3.2-Speciale is on par with Gemini-3-Pro on the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI). It even surpasses Gemini 3 Pro on several benchmarks.

DeepSeek identifies three critical bottlenecks:

> vanilla attention mechanisms that choke on long sequences,
> insufficient post-training compute,
> and weak generalization in agentic scenarios.

They introduce DeepSeek-V3.2, a model that tackles all three problems simultaneously.

One key innovation is DeepSeek Sparse Attention (DSA), which reduces attention complexity from O(L²) to O(Lk), where k is far smaller than the sequence length. A lightweight "lightning indexer" scores which tokens matter, then only those top-k tokens get full attention. The result: significant speedups on long contexts without sacrificing performance.

But architecture alone isn't enough. DeepSeek allocates post-training compute exceeding 10% of the pre-training cost, a massive RL investment that directly translates to reasoning capability.

For agentic tasks, they built an automatic environment-synthesis pipeline generating 1,827 distinct task environments and 85,000+ complex prompts: code agents, search agents, and general planning tasks, all synthesized at scale for RL training.

The numbers: on AIME 2025, DeepSeek-V3.2 hits 93.1% (GPT-5-High: 94.6%). On SWE-Verified, 73.1% resolved. On HLE text-only, 25.1% compared to GPT-5's 26.3%.

Their high-compute variant, DeepSeek-V3.2-Speciale, goes further, achieving gold-medal performance in IMO 2025 (35/42 points), IOI 2025 (492/600), and ICPC World Finals 2025 (10/12 problems solved).

This is the first open model to credibly compete with frontier proprietary systems across reasoning, coding, and agentic benchmarks.
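The top-k sparse attention idea described above can be sketched in a few lines. This is a minimal illustrative toy, not DeepSeek's actual DSA: the `lightning_index` scorer here is a hypothetical stand-in (in the real model it is a small learned module), and all function names and shapes are assumptions for the sake of the example.

```python
import numpy as np

def lightning_index(q, K):
    # Hypothetical lightweight indexer: a cheap per-token relevance score.
    # In DSA this role is played by a small learned "lightning indexer".
    return K @ q  # shape (L,)

def sparse_attention(q, K, V, k_top):
    """Top-k sparse attention for a single query vector:
    only the k_top highest-scoring tokens receive full attention,
    so the softmax runs over k tokens instead of all L."""
    scores = lightning_index(q, K)
    idx = np.argpartition(scores, -k_top)[-k_top:]   # O(L) top-k selection
    attn = (K[idx] @ q) / np.sqrt(q.shape[-1])       # attention over k tokens only
    w = np.exp(attn - attn.max())                    # stable softmax
    w /= w.sum()
    return w @ V[idx]

# Example: one query over a 4096-token sequence, attending to only 64 tokens.
rng = np.random.default_rng(0)
L, d = 4096, 64
q = rng.standard_normal(d)
K = rng.standard_normal((L, d))
V = rng.standard_normal((L, d))
out = sparse_attention(q, K, V, k_top=64)
print(out.shape)  # (64,)
```

Note that with `k_top = L` this reduces exactly to dense softmax attention, since the indexer only selects which tokens participate; the speedup comes from shrinking the softmax and value aggregation from L terms to k.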

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1995509721605038475/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1995509721605038475/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-04T20:37:58.656293",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1995509721605038475",
  "url": "https://x.com/omarsar0/status/1995509721605038475",
  "twitterUrl": "https://twitter.com/omarsar0/status/1995509721605038475",
  "text": "Major release from DeepSeek.\n\nAnd a big deal for open-source LLMs.\n\nDeepSeek-V3.2-Speciale is on par with Gemini-3-Pro on the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).\n\nIt even surpasses the Gemini 3 Pro on several benchmarks.\n\nDeepSeek identifies three critical bottlenecks:\n\n> vanilla attention mechanisms that choke on long sequences,\n> insufficient post-training compute,\n> and weak generalization in agentic scenarios.\n\nThey introduce DeepSeek-V3.2, a model that tackles all three problems simultaneously.\n\nOne key innovation is DeepSeek Sparse Attention (DSA), which reduces attention complexity from O(L²) to O(Lk) where k is far smaller than the sequence length. A lightweight \"lightning indexer\" scores which tokens matter, then only those top-k tokens get full attention.\n\nThe result: significant speedups on long contexts without sacrificing performance.\n\nBut architecture alone isn't enough. DeepSeek allocates post-training compute exceeding 10% of the pre-training cost, a massive RL investment that directly translates to reasoning capability.\n\nFor agentic tasks, they built an automatic environment-synthesis pipeline generating 1,827 distinct task environments and 85,000+ complex prompts. Code agents, search agents, and general planning tasks (all synthesized at scale for RL training)\n\nThe numbers: On AIME 2025, DeepSeek-V3.2 hits 93.1% (GPT-5-High: 94.6%). On SWE-Verified, 73.1% resolved. On HLE text-only, 25.1% compared to GPT-5's 26.3%.\n\nTheir high-compute variant, DeepSeek-V3.2-Speciale, goes further, achieving gold medals in IMO 2025 (35/42 points), IOI 2025 (492/600), and ICPC World Finals 2025 (10/12 problems solved).\n\nThis is the first open model to credibly compete with frontier proprietary systems across reasoning, coding, and agentic benchmarks.",
  "source": "Twitter for iPhone",
  "retweetCount": 54,
  "replyCount": 17,
  "likeCount": 318,
  "quoteCount": 3,
  "viewCount": 31019,
  "createdAt": "Mon Dec 01 15:06:07 +0000 2025",
  "lang": "en",
  "bookmarkCount": 148,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1995509721605038475",
  "displayTextRange": [
    0,
    273
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "omarsar0",
    "url": "https://x.com/omarsar0",
    "twitterUrl": "https://twitter.com/omarsar0",
    "id": "3448284313",
    "name": "elvis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "description": "",
    "location": "DAIR.AI Academy",
    "followers": 277867,
    "following": 724,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 33719,
    "hasCustomTimelines": true,
    "isTranslator": true,
    "mediaCount": 4356,
    "statusesCount": 16656,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1996595107924263287"
    ],
    "profile_bio": {
      "description": "Building agents @dair_ai • Ex Meta AI, Elastic, PhD • Sharing research & insights on AI Agents • New cohort: https://t.co/tn8LKG5d20",
      "entities": {
        "description": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com/courses/claude…",
              "expanded_url": "https://dair-ai.thinkific.com/courses/claude-code",
              "indices": [
                109,
                132
              ],
              "url": "https://t.co/tn8LKG5d20"
            }
          ],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                16,
                24
              ],
              "name": "",
              "screen_name": "dair_ai"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com",
              "expanded_url": "https://dair-ai.thinkific.com/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/JBU5beHQNs"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/9j9ZkJdbHx",
        "expanded_url": "https://twitter.com/omarsar0/status/1995509721605038475/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "1995509718186618881",
        "indices": [
          274,
          297
        ],
        "media_key": "3_1995509718186618881",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARuxeYQaWnABCgACG7F5hOYbYYsAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG7F5hBpacAEKAAIbsXmE5hthiwAA",
            "media_key": "3_1995509718186618881"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G7F5hBpacAEqixv.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 864,
              "w": 1542,
              "x": 0,
              "y": 0
            },
            {
              "h": 1542,
              "w": 1542,
              "x": 0,
              "y": 0
            },
            {
              "h": 1758,
              "w": 1542,
              "x": 0,
              "y": 0
            },
            {
              "h": 1998,
              "w": 999,
              "x": 250,
              "y": 0
            },
            {
              "h": 1998,
              "w": 1542,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1998,
          "width": 1542
        },
        "sizes": {
          "large": {
            "h": 1998,
            "w": 1542
          }
        },
        "type": "photo",
        "url": "https://t.co/9j9ZkJdbHx"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {},
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}