🐦 Twitter Post Details


@omarsar0

We are witnessing an incredible level of efficiency in reasoning models.

Faster and more efficient reasoning models are on the rise.

First, GPT-5 (and GPT-5-Codex) with remarkably efficient token use, and now Gemini 2.5 Deep Think, achieving gold-medal level performance at the ICPC 2025 under the same five-hour time constraint.

Gemini 2.5 Deep Think correctly solved 10 out of 12 real-world coding problems. It would be ranked in 2nd place overall if compared with the university teams in the competition.

As shown in the chart, Gemini’s time is in blue, and the fastest university team’s time is shown in gray. This is not an accident; this is what these companies are massively optimizing for right now. There is a quiet race for the fastest, smartest, and most efficient reasoning models.

Advances are happening across pre-training, post-training, novel RL techniques, intelligent routing, long-horizon capabilities, scalable and effective tool use, multi-step reasoning, and parallel thinking, just to name a few.

All these advancements are leading to reasoning models that respond faster on easy tasks and think for longer and efficiently on harder tasks. All while improving performance and capabilities across the board. It's important that mode switching happens dynamically because not every problem, state, and subtask demands the same level of compute.

This is just the beginning, but do expect companies like Google and OpenAI to keep innovating on model efficiency. This is good news for us AI engineers who build or use complex agentic workflows. Having access to faster and more efficient reasoning models scales productivity and application of intelligence across domains, unlike anything we have seen.
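The dynamic mode-switching idea in the post can be sketched as a simple difficulty-based router that allocates a thinking budget per task. Everything below (the heuristic, mode names, and token thresholds) is a hypothetical illustration, not any vendor's actual API or routing logic.

```python
# Hypothetical sketch of dynamic mode switching: route each task to a
# reasoning budget based on an estimated difficulty score. All names and
# thresholds here are illustrative, not any real model's interface.

def estimate_difficulty(task: str) -> float:
    """Toy heuristic: longer prompts and reasoning-heavy keywords score higher."""
    score = min(len(task) / 500, 1.0)
    if any(kw in task.lower() for kw in ("prove", "optimize", "multi-step")):
        score = min(score + 0.5, 1.0)
    return score

def pick_mode(task: str) -> dict:
    """Map the difficulty estimate to a (mode, thinking-token budget) pair."""
    d = estimate_difficulty(task)
    if d < 0.3:
        return {"mode": "fast", "max_thinking_tokens": 0}
    if d < 0.7:
        return {"mode": "standard", "max_thinking_tokens": 2_000}
    return {"mode": "deep", "max_thinking_tokens": 32_000}

print(pick_mode("What is 2 + 2?"))
print(pick_mode("Prove the algorithm terminates and optimize its memory use."))
```

A production router would use a learned difficulty estimator rather than string heuristics, but the shape is the same: cheap, fast responses for easy subtasks, extended thinking reserved for the hard ones.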

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1968378996573487699/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1968378996573487699/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-09-18T13:50:15.487141",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1968378996573487699",
  "url": "https://x.com/omarsar0/status/1968378996573487699",
  "twitterUrl": "https://twitter.com/omarsar0/status/1968378996573487699",
  "text": "We are witnessing an incredible level of efficiency in reasoning models.\n\nFaster and more efficient reasoning models are on the rise. \n\nFirst, GPT-5 (and GPT-5-Codex) with remarkably efficient token use, and now Gemini 2.5 Deep Think, achieving gold-medal level performance at the ICPC 2025 under the same five-hour time constraint.\n\nGemini 2.5 Deep Think correctly solved 10 out of 12 real-world coding problems. It would be ranked in 2nd place overall if compared with the university teams in the competition. \n\nAs shown in the chart, Gemini’s time is in blue, and the fastest university team’s time is shown in gray. This is not an accident; this is what these companies are massively optimizing for right now. There is a quiet race for the fastest, smartest, and most efficient reasoning models. \n\nAdvances are happening across pre-training, post-training, novel RL techniques, intelligent routing, long-horizon capabilities, scalable and effective tool use, multi-step reasoning, and parallel thinking, just to name a few.\n\nAll these advancements are leading to reasoning models that respond faster on easy tasks and think for longer and efficiently on harder tasks. All while improving performance and capabilities across the board. It's important that mode switching happens dynamically because not every problem, state, and subtask demands the same level of compute. \n\nThis is just the beginning, but do expect companies like Google and OpenAI to keep innovating on model efficiency. This is good news for us AI engineers who build or use complex agentic workflows. Having access to faster and more efficient reasoning models scales productivity and application of intelligence across domains, unlike anything we have seen.",
  "source": "Twitter for iPhone",
  "retweetCount": 15,
  "replyCount": 5,
  "likeCount": 138,
  "quoteCount": 3,
  "viewCount": 35156,
  "createdAt": "Wed Sep 17 18:18:18 +0000 2025",
  "lang": "en",
  "bookmarkCount": 72,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1968378996573487699",
  "displayTextRange": [
    0,
    281
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "omarsar0",
    "url": "https://x.com/omarsar0",
    "twitterUrl": "https://twitter.com/omarsar0",
    "id": "3448284313",
    "name": "elvis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "description": "",
    "location": "",
    "followers": 264604,
    "following": 674,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 32378,
    "hasCustomTimelines": true,
    "isTranslator": true,
    "mediaCount": 4080,
    "statusesCount": 15847,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1968438129846567163"
    ],
    "profile_bio": {
      "description": "Building with AI agents @dair_ai • Prev: Meta AI, Galactica LLM, Elastic, PaperswithCode, PhD • I share insights on how to build with AI Agents ↓",
      "entities": {
        "description": {
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                24,
                32
              ],
              "name": "",
              "screen_name": "dair_ai"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com",
              "expanded_url": "https://dair-ai.thinkific.com/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/JBU5beHQNs"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/PP1hQzLHez",
        "expanded_url": "https://twitter.com/omarsar0/status/1968378996573487699/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "h": 78,
                "w": 78,
                "x": 615,
                "y": 424
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "h": 78,
                "w": 78,
                "x": 615,
                "y": 424
              }
            ]
          }
        },
        "id_str": "1968366702644920320",
        "indices": [
          282,
          305
        ],
        "media_key": "3_1968366702644920320",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARtRCxZS24AACgACG1EWRLoaQlMAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG1ELFlLbgAAKAAIbURZEuhpCUwAA",
            "media_key": "3_1968366702644920320"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G1ELFlLbgAA2ttp.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 599,
              "w": 1070,
              "x": 0,
              "y": 28
            },
            {
              "h": 627,
              "w": 627,
              "x": 0,
              "y": 0
            },
            {
              "h": 627,
              "w": 550,
              "x": 18,
              "y": 0
            },
            {
              "h": 627,
              "w": 314,
              "x": 136,
              "y": 0
            },
            {
              "h": 627,
              "w": 1070,
              "x": 0,
              "y": 0
            }
          ],
          "height": 627,
          "width": 1070
        },
        "sizes": {
          "large": {
            "h": 627,
            "w": 1070
          }
        },
        "type": "photo",
        "url": "https://t.co/PP1hQzLHez"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {},
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}