🐦 Twitter Post Details

Viewing enriched Twitter post

@omarsar0

Reasoning models are expensive.

Not because the models are huge. It's because they generate thousands of tokens just to think.

But what if smaller models could learn to reason efficiently?

This new paper compares training 12B models on reasoning traces from two frontier systems:

- DeepSeek-R1
- gpt-oss (OpenAI's open-source reasoner)

The key finding: gpt-oss traces produce 4x more efficient reasoning. DeepSeek-R1 averages ~15,500 tokens per response; gpt-oss averages ~3,500. Yet accuracy stays nearly identical across benchmarks. Verbose reasoning doesn't mean better reasoning.

Why does this matter? Inference cost scales linearly with tokens. If your reasoning model generates 4x fewer tokens with the same accuracy, you cut costs by roughly 75%. That's a massive efficiency gain.

Interesting observation: Nemotron base models already had DeepSeek-R1 traces in pretraining. Training loss on DeepSeek traces started low and stayed flat, while loss on gpt-oss traces started high and dropped gradually, showing the model was learning something new. So you can distill reasoning capabilities from frontier models into smaller systems, but the source matters: different reasoning styles produce different efficiency profiles.

(bookmark it)

Paper: arxiv.org/abs/2511.19333
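The cost claim follows directly from linear per-token pricing. A minimal sketch of the arithmetic, using only the average token counts quoted in the post (the variable names and print format are illustrative, not from the paper):

```python
# With per-token inference pricing, cost is proportional to generated tokens,
# so the relative saving from shorter reasoning traces is just a ratio.
r1_tokens = 15_500   # avg tokens per response, DeepSeek-R1 traces (from the post)
oss_tokens = 3_500   # avg tokens per response, gpt-oss traces (from the post)

speedup = r1_tokens / oss_tokens       # how many times fewer tokens
savings = 1 - oss_tokens / r1_tokens   # fractional cost reduction at equal accuracy
print(f"{speedup:.1f}x fewer tokens, {savings:.0%} cost reduction")
# → 4.4x fewer tokens, 77% cost reduction
```

The exact figure is ~77%; the post's "4x fewer tokens, 75% savings" is the rounded version of the same calculation.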

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1993695515595444366/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1993695515595444366/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-04T20:38:09.964436",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1993695515595444366",
  "url": "https://x.com/omarsar0/status/1993695515595444366",
  "twitterUrl": "https://twitter.com/omarsar0/status/1993695515595444366",
  "text": "Reasoning models are expensive.\n\nNot because the models are huge.\n\nIt's because they generate thousands of tokens just to think.\n\nBut what if smaller models could learn to reason efficiently?\n\nThis new paper compares training 12B models on reasoning traces from two frontier systems:\n\n- DeepSeek-R1\n- gpt-oss (OpenAI's open-source reasoner)\n\nThe key finding: gpt-oss traces produce 4x more efficient reasoning. DeepSeek-R1 averages ~15,500 tokens per response. gpt-oss averages ~3,500 tokens.\n\nYet accuracy stays nearly identical across benchmarks. Verbose reasoning doesn't mean better reasoning.\n\nWhy does this matter?\n\nInference cost scales linearly with tokens. If your reasoning model generates 4x fewer tokens with the same accuracy, you cut costs by 75%.\n\nThat's a massive efficiency gain.\n\nInteresting observation: Nemotron base models already had DeepSeek-R1 traces in pretraining. Training loss on DeepSeek traces started low and stayed flat. Training loss on gpt-oss traces started high and dropped gradually.\n\nThey showed that the model was learning something new, which also means you can distill reasoning capabilities from frontier models into smaller systems. But the source matters. Different reasoning styles produce different efficiency profiles.\n\n(bookmark it)\n\nPaper: arxiv. org/abs/2511.19333",
  "source": "Twitter for iPhone",
  "retweetCount": 67,
  "replyCount": 18,
  "likeCount": 440,
  "quoteCount": 10,
  "viewCount": 22971,
  "createdAt": "Wed Nov 26 14:57:06 +0000 2025",
  "lang": "en",
  "bookmarkCount": 347,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1993695515595444366",
  "displayTextRange": [
    0,
    275
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "omarsar0",
    "url": "https://x.com/omarsar0",
    "twitterUrl": "https://twitter.com/omarsar0",
    "id": "3448284313",
    "name": "elvis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "description": "",
    "location": "DAIR.AI Academy",
    "followers": 277867,
    "following": 724,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 33719,
    "hasCustomTimelines": true,
    "isTranslator": true,
    "mediaCount": 4356,
    "statusesCount": 16656,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1996595107924263287"
    ],
    "profile_bio": {
      "description": "Building agents @dair_ai • Ex Meta AI, Elastic, PhD • Sharing research & insights on AI Agents • New cohort: https://t.co/tn8LKG5d20",
      "entities": {
        "description": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com/courses/claude…",
              "expanded_url": "https://dair-ai.thinkific.com/courses/claude-code",
              "indices": [
                109,
                132
              ],
              "url": "https://t.co/tn8LKG5d20"
            }
          ],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                16,
                24
              ],
              "name": "",
              "screen_name": "dair_ai"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com",
              "expanded_url": "https://dair-ai.thinkific.com/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/JBU5beHQNs"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/clTEBDkwwX",
        "expanded_url": "https://twitter.com/omarsar0/status/1993695515595444366/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "1993695511933792256",
        "indices": [
          276,
          299
        ],
        "media_key": "3_1993695511933792256",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARurB4FLGrAACgACG6sHgiVbEI4AAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG6sHgUsasAAKAAIbqweCJVsQjgAA",
            "media_key": "3_1993695511933792256"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G6sHgUsasAAkWI0.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 782,
              "w": 1396,
              "x": 0,
              "y": 0
            },
            {
              "h": 1396,
              "w": 1396,
              "x": 0,
              "y": 0
            },
            {
              "h": 1591,
              "w": 1396,
              "x": 0,
              "y": 0
            },
            {
              "h": 1788,
              "w": 894,
              "x": 223,
              "y": 0
            },
            {
              "h": 1788,
              "w": 1396,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1788,
          "width": 1396
        },
        "sizes": {
          "large": {
            "h": 1788,
            "w": 1396
          }
        },
        "type": "photo",
        "url": "https://t.co/clTEBDkwwX"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {},
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}