🐦 Twitter Post Details

Viewing enriched Twitter post

@LiorOnAI

Mercury 2 doesn't just make reasoning models faster. It makes them native.

Every reasoning model today is built on autoregressive generation, where the model writes one word at a time, left to right, like typing on a keyboard.

Each word waits for the previous one to finish.

The problem compounds when reasoning depth increases: multi-step agents, voice systems, and coding assistants all need many sequential passes, and each pass multiplies the delay.

The industry has spent billions on chips, compression, and serving infrastructure to squeeze more speed from this sequential loop. But you're still optimizing a bottleneck.

Mercury 2 uses diffusion instead. It starts with a rough draft of the entire response and refines all the words simultaneously through multiple passes.

Each pass improves many tokens in parallel, so one neural network evaluation does far more work. The model can also correct mistakes mid-generation because nothing is locked in until the final pass.

This isn't a serving trick or a hardware optimization. The speed comes from the architecture itself.

This unlocks workflows that were impractical before:

1. Multi-step agents that run 10+ reasoning loops without compounding latency
2. Voice AI that hits sub-200ms response times with full reasoning enabled
3. Real-time code editors where every keystroke triggers model feedback

Mercury 2 runs at 1,000 tokens per second while matching the quality of models that generate 70-90 tokens per second.

If this performance holds across model sizes, reasoning stops being a batch process you run overnight and becomes something you embed everywhere.

Agent loops become tight enough for interactive debugging. Voice systems feel instant instead of sluggish. Code assistants respond faster than you can move your cursor.

The entire category of "too slow for production" collapses.
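The compounding-latency claim above is simple arithmetic, and it's worth making explicit. A back-of-the-envelope sketch using the throughput figures from the post (~80 tok/s for speed-optimized autoregressive models, 1,000 tok/s claimed for Mercury 2); the loop count and tokens-per-step are hypothetical workload assumptions, not figures from the post:

```python
def agent_latency_seconds(loops: int, tokens_per_step: int,
                          tokens_per_sec: float) -> float:
    """Total generation time for a sequential agent: each reasoning step
    must finish before the next begins, so per-step delay adds up linearly."""
    return loops * tokens_per_step / tokens_per_sec

# Hypothetical workload: 10 reasoning loops, 400 tokens generated per loop.
loops, tokens = 10, 400

# ~80 tok/s (speed-optimized autoregressive) vs. 1,000 tok/s (Mercury 2 claim).
print(agent_latency_seconds(loops, tokens, 80))    # 50.0 — batch territory
print(agent_latency_seconds(loops, tokens, 1000))  # 4.0 — interactive
```

Under these assumptions a ~12x throughput gain is exactly the difference between an agent you wait on and one you converse with, which is the post's core argument.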

📊 Media Metadata

{
  "score": 0.42,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-03-01T12:17:39.020443",
  "import_source": "api_import",
  "source_tagged_at": "2026-03-01T12:17:39.020457",
  "enriched": true,
  "enriched_at": "2026-03-01T12:17:39.020459"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2026376138428395908",
  "url": "https://x.com/LiorOnAI/status/2026376138428395908",
  "twitterUrl": "https://twitter.com/LiorOnAI/status/2026376138428395908",
  "text": "Mercury 2 doesn't just make reasoning models faster. It makes them native.\n\nEvery reasoning model today is built on autoregressive generation, where the model writes one word at a time, left to right, like typing on a keyboard. \n\nEach word waits for the previous one to finish. \n\nThe problem compounds when reasoning depth increases: multi-step agents, voice systems, and coding assistants all need many sequential passes, and each pass multiplies the delay. \n\nThe industry has spent billions on chips, compression, and serving infrastructure to squeeze more speed from this sequential loop. But you're still optimizing a bottleneck.\n\nMercury 2 uses diffusion instead. It starts with a rough draft of the entire response and refines all the words simultaneously through multiple passes. \n\nEach pass improves many tokens in parallel, so one neural network evaluation does far more work. The model can also correct mistakes mid-generation because nothing is locked in until the final pass. \n\nThis isn't a serving trick or a hardware optimization. The speed comes from the architecture itself.\n\nThis unlocks workflows that were impractical before:\n\n1. Multi-step agents that run 10+ reasoning loops without compounding latency\n\n2. Voice AI that hits sub-200ms response times with full reasoning enabled\n\n3. Real-time code editors where every keystroke triggers model feedback\n\nMercury 2 runs at 1,000 tokens per second while matching the quality of models that generate 70-90 tokens per second. \n\nIf this performance holds across model sizes, reasoning stops being a batch process you run overnight and becomes something you embed everywhere. \n\nAgent loops become tight enough for interactive debugging. Voice systems feel instant instead of sluggish. Code assistants respond faster than you can move your cursor. \n\nThe entire category of \"too slow for production\" collapses.",
  "source": "Twitter for iPhone",
  "retweetCount": 4,
  "replyCount": 9,
  "likeCount": 49,
  "quoteCount": 0,
  "viewCount": 6704,
  "createdAt": "Tue Feb 24 19:18:14 +0000 2026",
  "lang": "en",
  "bookmarkCount": 12,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2026376138428395908",
  "displayTextRange": [
    0,
    277
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "LiorOnAI",
    "url": "https://x.com/LiorOnAI",
    "twitterUrl": "https://twitter.com/LiorOnAI",
    "id": "931470139",
    "name": "Lior Alexander",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/2027106343283527680/lh729xEs_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/931470139/1761077189",
    "description": "",
    "location": "",
    "followers": 112932,
    "following": 2153,
    "status": "",
    "canDm": true,
    "canMediaTag": false,
    "createdAt": "Wed Nov 07 07:19:36 +0000 2012",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 6770,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 661,
    "statusesCount": 3756,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {
      "description": "Covering the latest news for AI devs • Founder @AlphaSignalAI (270k users) •  ML Eng since 2017 • Ex-Mila • MIT",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                47,
                61
              ],
              "name": "",
              "screen_name": "AlphaSignalAI"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "alphasignal.ai",
              "expanded_url": "https://alphasignal.ai",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/AyubevadmD"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": {
    "type": "tweet",
    "id": "2026297527843409933",
    "url": "https://x.com/_inception_ai/status/2026297527843409933",
    "twitterUrl": "https://twitter.com/_inception_ai/status/2026297527843409933",
    "text": "Mercury 2 is live.\n\nThe world's first reasoning diffusion LLM – 5x faster than leading speed-optimized autoregressive models.\n\nBuilt for production: multi-step agents without delays, voice AI with tight latency budgets, instant coding feedback.\n\nDiffusion-based generation enables parallel refinement, not sequential tokens. Faster. More controllable. Dramatically lower inference cost.\n\nAvailable today on the Inception API.\n\n@dinabass has the story in @business.",
    "source": "Twitter for iPhone",
    "retweetCount": 61,
    "replyCount": 30,
    "likeCount": 427,
    "quoteCount": 23,
    "viewCount": 193484,
    "createdAt": "Tue Feb 24 14:05:52 +0000 2026",
    "lang": "en",
    "bookmarkCount": 211,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "2026297527843409933",
    "displayTextRange": [
      0,
      280
    ],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {
      "type": "user",
      "userName": "_inception_ai",
      "url": "https://x.com/_inception_ai",
      "twitterUrl": "https://twitter.com/_inception_ai",
      "id": "1894161655728410630",
      "name": "Inception",
      "isVerified": false,
      "isBlueVerified": true,
      "verifiedType": "Business",
      "profilePicture": "https://pbs.twimg.com/profile_images/2026172318377242624/4c27IKX8_normal.jpg",
      "coverPicture": "https://pbs.twimg.com/profile_banners/1894161655728410630/1771907644",
      "description": "",
      "location": "",
      "followers": 16425,
      "following": 10,
      "status": "",
      "canDm": false,
      "canMediaTag": true,
      "createdAt": "Mon Feb 24 23:05:22 +0000 2025",
      "entities": {
        "description": {
          "urls": []
        },
        "url": {}
      },
      "fastFollowersCount": 0,
      "favouritesCount": 110,
      "hasCustomTimelines": true,
      "isTranslator": false,
      "mediaCount": 38,
      "statusesCount": 166,
      "withheldInCountries": [],
      "affiliatesHighlightedLabel": {},
      "possiblySensitive": false,
      "pinnedTweetIds": [
        "2026341782716657749"
      ],
      "profile_bio": {
        "description": "Pioneering a new generation of LLMs.",
        "entities": {
          "description": {
            "hashtags": [],
            "symbols": [],
            "urls": [],
            "user_mentions": []
          },
          "url": {
            "urls": [
              {
                "display_url": "inceptionlabs.ai",
                "expanded_url": "https://www.inceptionlabs.ai/",
                "indices": [
                  0,
                  23
                ],
                "url": "https://t.co/dWl5yDy6pi"
              }
            ]
          }
        }
      },
      "isAutomated": false,
      "automatedBy": null
    },
    "extendedEntities": {
      "media": [
        {
          "allow_download_status": {
            "allow_download": true
          },
          "display_url": "pic.twitter.com/pvrQWeeWid",
          "expanded_url": "https://twitter.com/_inception_ai/status/2026297527843409933/photo/1",
          "ext_media_availability": {
            "status": "Available"
          },
          "features": {
            "large": {
              "faces": []
            },
            "orig": {
              "faces": []
            }
          },
          "id_str": "2026297176754884608",
          "indices": [
            281,
            304
          ],
          "media_key": "3_2026297176754884608",
          "media_results": {
            "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARwe2ovl2hAACgACHB7a3aRb0A0AAA==",
            "result": {
              "__typename": "ApiMedia",
              "id": "QXBpTWVkaWE6DAABCgABHB7ai+XaEAAKAAIcHtrdpFvQDQAA",
              "media_key": "3_2026297176754884608"
            }
          },
          "media_url_https": "https://pbs.twimg.com/media/HB7ai-XaEAArl1t.jpg",
          "original_info": {
            "focus_rects": [
              {
                "h": 968,
                "w": 1728,
                "x": 0,
                "y": 0
              },
              {
                "h": 1728,
                "w": 1728,
                "x": 0,
                "y": 0
              },
              {
                "h": 1970,
                "w": 1728,
                "x": 0,
                "y": 0
              },
              {
                "h": 2304,
                "w": 1152,
                "x": 0,
                "y": 0
              },
              {
                "h": 2304,
                "w": 1728,
                "x": 0,
                "y": 0
              }
            ],
            "height": 2304,
            "width": 1728
          },
          "sizes": {
            "large": {
              "h": 2048,
              "w": 1536
            }
          },
          "type": "photo",
          "url": "https://t.co/pvrQWeeWid"
        }
      ]
    },
    "card": null,
    "place": {},
    "entities": {
      "hashtags": [],
      "symbols": [],
      "urls": [],
      "user_mentions": [
        {
          "id_str": "6642152",
          "indices": [
            427,
            436
          ],
          "name": "Dina Bass",
          "screen_name": "dinabass"
        },
        {
          "id_str": "34713362",
          "indices": [
            454,
            463
          ],
          "name": "Bloomberg",
          "screen_name": "business"
        }
      ]
    },
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "isLimitedReply": false,
    "article": null
  },
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}