🐦 Twitter Post Details

Viewing enriched Twitter post

@mattshumer_

I've been testing GPT-5.4 for the last week.

In short, it is the best model in the world, by far.

It's so good that it's the first model that makes the "which model should I use?" conversation feel almost over.

The biggest surprise: I barely use Pro anymore!

If you know me, you know I'm a Pro addict. I reach for Pro models constantly, and use them for almost everything, as they just... nail almost anything I give to them.

For the first time, 5.4's standard version, with heavy thinking, just broke that habit.

Even in standard mode, GPT-5.4 is better than previous models in Pro mode... crazy!

Coding capabilities are ridiculous... it's essentially flawless. Inside Codex, it's insanely reliable. Coding is essentially solved. There's not much more to say on this, it's just THAT good.

The Pro version is near-perfect. Other testers I spoke with saw it solving problems that were unsolvable by any other model. At this point, Pro is overkill for almost every normal use-case, but when you really need the power to do something extremely difficult, it's incredible.

Consistent with everything I've said above, even the standard thinking version uses fewer reasoning tokens than previous models to get the same level of results. In practice, this means you get great results much faster than before. This was one of my biggest gripes with previous OpenAI models. They just took too long to complete simple tasks. Assuming the speed we had during testing holds up as more users join, this is going to be a big win for OpenAI.

It still has weaknesses, though:

- Frontend taste is FAR behind Opus 4.6 and Gemini 3.1 Pro. Why is this so hard to fix? @OpenAI, once you fix this, there's literally no reason for me to use any other model. Please please please do it!

- It can still miss obvious real-world context. For example, I had it plan an itinerary for a trip. At first glance, it looked perfect, but it had chosen locations that would be mobbed by spring breakers, so I had to re-run the prompt from scratch with more context.

- When testing it inside OpenClaw, it kept stopping short before finishing tasks. I'm assuming this will be fixed quickly, but it's still worth noting.

But zooming out:

This thing is so far ahead overall that the nitpicks are starting to feel beside the point.

GPT-5.4 is a serious fucking model.

The best model in the world.

By far.

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2029620518249508950/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2029620518249508950/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-03-06T14:19:50.052044",
  "pipeline_version": "2.0"
}
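The media metadata above is the pipeline-2.0 payload shape. As a minimal sketch of consuming it, the snippet below pulls the download URLs for photo entries out of such a payload. Field names (`media`, `url`, `type`) are taken directly from the JSON above; everything else, including the helper name `photo_urls`, is illustrative.

```python
import json

# Payload mirroring the enriched media metadata shown above.
metadata_json = """
{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2029620518249508950/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2029620518249508950/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-03-06T14:19:50.052044",
  "pipeline_version": "2.0"
}
"""

def photo_urls(payload: dict) -> list[str]:
    # Keep only photo entries; other media types (e.g. video) would
    # need different handling, which this sketch does not cover.
    return [m["url"] for m in payload.get("media", []) if m.get("type") == "photo"]

payload = json.loads(metadata_json)
print(photo_urls(payload))
```

Note the trailing `?` on the stored URLs, which is preserved here as-is since it appears verbatim in the payload.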

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2029620518249508950",
  "url": "https://x.com/mattshumer_/status/2029620518249508950",
  "twitterUrl": "https://twitter.com/mattshumer_/status/2029620518249508950",
  "text": "I've been testing GPT-5.4 for the last week.\n\nIn short, it is the best model in the world, by far.\n\nIt's so good that it's the first model that makes the “which model should I use?” conversation feel almost over.\n\nThe biggest surprise: I barely use Pro anymore!\n\nIf you know me, you know I'm a Pro addict. I reach for Pro models constantly, and use them for almost everything, as they just... nail almost anything I give to them.\n\nFor the first time, 5.4's standard version, with heavy thinking, just broke that habit.\n\nEven in standard mode, GPT-5.4 is better than previous models in Pro mode... crazy!\n\nCoding capabilities are ridiculous... it's essentially flawless. Inside Codex, it's insanely reliable. Coding is essentially solved. There's not much more to say on this, it's just THAT good.\n\nThe Pro version is near-perfect. Other testers I spoke with saw it solving problems that were unsolvable by any other model. At this point, Pro is overkill for almost every normal use-case, but when you really need the power to do something extremely difficult, it's incredible.\n\nConsistent with everything I've said above, even the standard thinking version uses fewer reasoning tokens than previous models to get the same level of results. In practice, this means you get great results much faster than before. This was one of my biggest gripes with previous OpenAI models. They just took too long to complete simple tasks. Assuming the speed we had during testing holds up as more users join, this is going to be a big win for OpenAI.\n\nIt still has weaknesses, though:\n\n- Frontend taste is FAR behind Opus 4.6 and Gemini 3.1 Pro. , why is this so hard to fix? @OpenAI once you fix this, there's literally no reason for me to use any other model. Please please please do it!\n\n- It can still miss obvious real-world context. For example, I had it plan an itinerary for a trip. At first glance, it looked perfect, but it failed to take into account that it chose locations that would be mobbed by spring breakers, so I had to re-run the prompt from scratch with more context.\n\n- When testing it inside OpenClaw, it kept stopping short before finishing tasks. I'm assuming this will be fixed quickly, but it's still worth noting. \n\nBut zooming out:\n\nThis thing is so far ahead overall that the nitpicks are starting to feel beside the point.\n\nGPT-5.4 is a serious fucking model.\n\nThe best model in the world.\n\nBy far.",
  "source": "Twitter for iPhone",
  "retweetCount": 204,
  "replyCount": 289,
  "likeCount": 2786,
  "quoteCount": 91,
  "viewCount": 1255454,
  "createdAt": "Thu Mar 05 18:10:14 +0000 2026",
  "lang": "en",
  "bookmarkCount": 874,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2029620518249508950",
  "displayTextRange": [
    0,
    278
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "mattshumer_",
    "url": "https://x.com/mattshumer_",
    "twitterUrl": "https://twitter.com/mattshumer_",
    "id": "1194889317388374016",
    "name": "Matt Shumer",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1490950574090571778/BtgOaqUP_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1194889317388374016/1605142094",
    "description": "",
    "location": "NYC",
    "followers": 355768,
    "following": 1573,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Thu Nov 14 08:06:43 +0000 2019",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 10036,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 875,
    "statusesCount": 10032,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2021256989876109403"
    ],
    "profile_bio": {
      "description": "CEO @HyperWriteAI, @OthersideAI, investing via Shumer Capital in @GroqInc @Etched @Rork @DaytonaIO @OpenRouter + more\n\nPress: mattshumermedia@gmail.com",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                4,
                17
              ],
              "name": "",
              "screen_name": "HyperWriteAI"
            },
            {
              "id_str": "0",
              "indices": [
                19,
                31
              ],
              "name": "",
              "screen_name": "OthersideAI"
            },
            {
              "id_str": "0",
              "indices": [
                65,
                73
              ],
              "name": "",
              "screen_name": "GroqInc"
            },
            {
              "id_str": "0",
              "indices": [
                74,
                81
              ],
              "name": "",
              "screen_name": "Etched"
            },
            {
              "id_str": "0",
              "indices": [
                82,
                87
              ],
              "name": "",
              "screen_name": "Rork"
            },
            {
              "id_str": "0",
              "indices": [
                88,
                98
              ],
              "name": "",
              "screen_name": "DaytonaIO"
            },
            {
              "id_str": "0",
              "indices": [
                99,
                110
              ],
              "name": "",
              "screen_name": "OpenRouter"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "shumer.dev/about",
              "expanded_url": "https://shumer.dev/about",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/Kh3hzlppcM"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/9ChoYf17cr",
        "expanded_url": "https://twitter.com/mattshumer_/status/2029620518249508950/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "id_str": "2029617504457306112",
        "indices": [
          279,
          302
        ],
        "media_key": "3_2029617504457306112",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARwqpl3vF7AACgACHCqpG6MbMFYAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABHCqmXe8XsAAKAAIcKqkboxswVgAA",
            "media_key": "3_2029617504457306112"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/HCqmXe8XsAA2DFQ.png",
        "original_info": {
          "focus_rects": [
            {
              "h": 2150,
              "w": 3840,
              "x": 0,
              "y": 10
            },
            {
              "h": 2160,
              "w": 2160,
              "x": 168,
              "y": 0
            },
            {
              "h": 2160,
              "w": 1895,
              "x": 301,
              "y": 0
            },
            {
              "h": 2160,
              "w": 1080,
              "x": 708,
              "y": 0
            },
            {
              "h": 2160,
              "w": 3840,
              "x": 0,
              "y": 0
            }
          ],
          "height": 2160,
          "width": 3840
        },
        "sizes": {
          "large": {
            "h": 1152,
            "w": 2048
          }
        },
        "type": "photo",
        "url": "https://t.co/9ChoYf17cr"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": [
      {
        "id_str": "4398626122",
        "indices": [
          1661,
          1668
        ],
        "name": "OpenAI",
        "screen_name": "OpenAI"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}
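The raw API response above carries the post's engagement counters as flat top-level fields. As a hedged sketch of reading them, the snippet below totals the interaction counts and derives an interactions-per-view ratio. The field names (`viewCount`, `likeCount`, etc.) come from the JSON above; the helper name `engagement_summary` and the ratio itself are illustrative choices, not part of the API.

```python
def engagement_summary(tweet: dict) -> dict:
    # Field names match the raw API response shown above.
    views = tweet.get("viewCount", 0)
    interactions = (
        tweet.get("likeCount", 0)
        + tweet.get("retweetCount", 0)
        + tweet.get("replyCount", 0)
        + tweet.get("quoteCount", 0)
        + tweet.get("bookmarkCount", 0)
    )
    return {
        "views": views,
        "interactions": interactions,
        # Interactions per view; guard against a zero view count.
        "rate": interactions / views if views else 0.0,
    }

# Counters copied from the raw API response above.
tweet = {
    "viewCount": 1255454,
    "likeCount": 2786,
    "retweetCount": 204,
    "replyCount": 289,
    "quoteCount": 91,
    "bookmarkCount": 874,
}
print(engagement_summary(tweet))
```

Using `dict.get` with a default of 0 keeps the helper robust if a counter is absent, since not every response variant is guaranteed to include all fields.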