🐦 Twitter Post Details

Viewing enriched Twitter post

@gerardsans

@ChanPerco The general public is missing an important dimension to judge AI models: operational costs.

Compute time is the silent variable that the press and benchmarks ignore.

A model that spends 48h of inference to reach what another hits in seconds simply doesn’t show up in today’s data.

Yet that’s exactly what would reveal whether the approach is economically viable.

Claims of “recursive self-improvement” are mostly bounded optimization over fixed support. LLMs aren’t open-ended learners here: they’re function approximators resampling the same distribution. That alone locks in diminishing returns ≠ takeoff.

Every agent loop or test-time compute burns tokens and FLOPs. Benchmarks show the wins. They rarely show the avg@k reality: how many background runs it actually took.

Businesses don’t have unlimited VC capital to burn on tokens. The moment the subsidised token pot goes dry, the whole hotdog stand may crash and burn from its own operation.

Real organisations optimise for return on spend, not “best output no matter the price.” Technical gains plateau fast while inference costs scale linearly with every extra loop.

So you hit two ceilings at once:

→ Technical: diminishing returns from bounded optimisation
→ Economic: compounding costs for shrinking gains

There’s no free compounding flywheel. You’re trading ever-more compute for incremental refinement, and that trade stops making sense long before takeoff.

Your agentic AI workforce looks magically self-sustaining. Right up until the bill arrives and you are forced to close shop.
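The “two ceilings” argument in the tweet can be sketched with a toy best-of-k model: under independent resampling, success probability follows a pass@k curve that saturates, while token cost grows linearly with attempts, so the marginal gain per dollar collapses. All numbers below (cost per run, single-run success rate) are illustrative assumptions, not measurements from any real model.

```python
# Toy model of the tweet's "two ceilings": quality from repeated
# sampling plateaus (bounded optimisation over a fixed distribution)
# while inference cost scales linearly with every extra attempt.
# COST_PER_RUN and P_SUCCESS are made-up illustrative constants.

COST_PER_RUN = 0.02   # assumed dollars of inference per attempt
P_SUCCESS = 0.15      # assumed chance one run solves the task

def best_of_k(k: int) -> tuple[float, float]:
    """Return (success probability, total cost) after k independent runs.

    Success probability is the standard pass@k estimate under
    independence: 1 - (1 - p)^k. Cost is simply linear in k.
    """
    quality = 1.0 - (1.0 - P_SUCCESS) ** k
    cost = COST_PER_RUN * k
    return quality, cost

if __name__ == "__main__":
    prev_q, prev_c = best_of_k(1)
    for k in (4, 16, 64):
        q, c = best_of_k(k)
        # Marginal quality gained per extra dollar spent on this step.
        gain_per_dollar = (q - prev_q) / (c - prev_c)
        print(f"k={k:3d}  quality={q:.4f}  cost=${c:.2f}  "
              f"gain/$={gain_per_dollar:.3f}")
        prev_q, prev_c = q, c
```

Running this shows quality climbing toward 1.0 while gain-per-dollar falls with every scaling step: the economic ceiling arrives well before the quality curve flattens completely, which is the plateau-versus-linear-cost trade the tweet describes.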

📊 Media Metadata

{
  "score": 0.42,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-04-17T01:09:03.334810",
  "import_source": "api_import",
  "source_tagged_at": "2026-04-17T01:09:03.334823",
  "enriched": true,
  "enriched_at": "2026-04-17T01:09:03.334826"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2044944840375382102",
  "url": "https://x.com/gerardsans/status/2044944840375382102",
  "twitterUrl": "https://twitter.com/gerardsans/status/2044944840375382102",
  "text": "@ChanPerco The general public is missing an important dimension to judge AI models: operational costs. \n\nCompute time is the silent variable that the press and benchmarks ignore.\n\nA model that spends 48h of inference to reach what another hits in seconds simply doesn’t show up in today’s data.\n\nYet that’s exactly what would reveal whether the approach is economically viable.\n\nClaims of “recursive self-improvement” are mostly bounded optimization over fixed support. LLMs aren’t open-ended learners here: they’re function approximators resampling the same distribution. That alone locks in diminishing returns ≠ takeoff.\n\nEvery agent loop or test-time compute burns tokens and FLOPs. Benchmarks show the wins. They rarely show the avg@k reality: how many background runs it actually took.\n\nBusinesses don’t have unlimited VC capital to burn on tokens. The moment the subsidised token pot goes dry, the whole hotdog stand may crash and burn from its own operation.\n\nReal organisations optimise for return on spend, not “best output no matter the price.” Technical gains plateau fast while inference costs scale linearly with every extra loop.\n\nSo you hit two ceilings at once:\n\n→ Technical: diminishing returns from bounded optimisation  \n→ Economic: compounding costs for shrinking gains\n\nThere’s no free compounding flywheel. You’re trading ever-more compute for incremental refinement, and that trade stops making sense long before takeoff.\n\nYour agentic AI workforce looks magically self-sustaining. Right up until the bill arrives and you are forced to close shop.",
  "source": "Twitter for iPhone",
  "retweetCount": 0,
  "replyCount": 0,
  "likeCount": 0,
  "quoteCount": 0,
  "viewCount": 2,
  "createdAt": "Fri Apr 17 01:03:37 +0000 2026",
  "lang": "en",
  "bookmarkCount": 0,
  "isReply": true,
  "inReplyToId": "2044871959020716268",
  "conversationId": "2044871959020716268",
  "displayTextRange": [
    11,
    288
  ],
  "inReplyToUserId": "48130604",
  "inReplyToUsername": "ChanPerco",
  "author": {
    "type": "user",
    "userName": "gerardsans",
    "url": "https://x.com/gerardsans",
    "twitterUrl": "https://twitter.com/gerardsans",
    "id": "9284062",
    "name": "Gerard Sans | Axiom 🇬🇧",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1938955632763105280/aBJaOCJJ_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/9284062/1751119206",
    "description": "",
    "location": "London ☔",
    "followers": 36076,
    "following": 6874,
    "status": "",
    "canDm": true,
    "canMediaTag": false,
    "createdAt": "Sat Oct 06 20:04:48 +0000 2007",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 26317,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 5141,
    "statusesCount": 39652,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1741153588959654329"
    ],
    "profile_bio": {
      "description": "Founder Axiom // Forging skills for the new era of AI. GDE in AI, Cloud & Angular. Building London's tech & art nexus @nextai_london. Speaker | MC | Trainer.",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                118,
                132
              ],
              "name": "",
              "screen_name": "nextai_london"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "aws.amazon.com/amplify",
              "expanded_url": "http://aws.amazon.com/amplify",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/ufepRUvlgW"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "communityInfo": null,
  "article": null
}