🐦 Twitter Post Details

@hardmaru

“Why AGI Will Not Happen” @Tim_Dettmers

https://t.co/HBXAn0AJkp

This essay is worth reading. Discusses diminishing returns (and risks) of scaling. The contrast between West and East: “Winner Takes All” approach of building the biggest thing vs a long-term focus on practicality.

“The purpose of this blog post is to address what I see as very sloppy thinking, thinking that is created in an echo chamber, particularly in the Bay Area, where the same ideas amplify themselves without critical awareness. This amplification of bad ideas and thinking exuded by the rationalist and EA movements, is a big problem in shaping a beneficial future for everyone.”

“A key problem with ideas, particularly those coming from the Bay Area, is that they often live entirely in the idea space. Most people who think about AGI, superintelligence, scaling laws, and hardware improvements treat these concepts as abstract ideas that can be discussed like philosophical thought experiments. In fact, a lot of the thinking about superintelligence and AGI comes from Oxford-style philosophy. Oxford, the birthplace of effective altruism, mixed with the rationality culture from the Bay Area, gave rise to a strong distortion of how to clearly think about certain ideas.”

Media 1

📊 Media Metadata

{
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000038674835128718/media_0.jpg?",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-14T03:45:20.559541",
  "pipeline_version": "2.0"
}
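For downstream use, here is a minimal sketch of how a consumer of this pipeline output might read the metadata block above, assuming it has been saved to a local file named media_metadata.json (the filename is an assumption; the field names are the ones shown in the JSON):

import json

# Assumption: the media metadata block above was saved to this file.
with open("media_metadata.json") as f:
    meta = json.load(f)

# "media", "type", "url", and "filename" are the keys shown above.
for item in meta["media"]:
    if item["type"] == "photo":
        print(f"{item['filename']}: {item['url']}")

print("processed_at:", meta["processed_at"], "pipeline:", meta["pipeline_version"])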

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2000038674835128718",
  "url": "https://x.com/hardmaru/status/2000038674835128718",
  "twitterUrl": "https://twitter.com/hardmaru/status/2000038674835128718",
  "text": "“Why AGI Will Not Happen” @Tim_Dettmers\n\nhttps://t.co/HBXAn0AJkp\n\nThis essay is worth reading. Discusses diminishing returns (and risks) of scaling. The contrast between West and East: “Winner Takes All” approach of building the biggest thing vs a long-term focus on practicality.\n\n“The purpose of this blog post is to address what I see as very sloppy thinking, thinking that is created in an echo chamber, particularly in the Bay Area, where the same ideas amplify themselves without critical awareness. This amplification of bad ideas and thinking exuded by the rationalist and EA movements, is a big problem in shaping a beneficial future for everyone.”\n\n“A key problem with ideas, particularly those coming from the Bay Area, is that they often live entirely in the idea space. Most people who think about AGI, superintelligence, scaling laws, and hardware improvements treat these concepts as abstract ideas that can be discussed like philosophical thought experiments. In fact, a lot of the thinking about superintelligence and AGI comes from Oxford-style philosophy. Oxford, the birthplace of effective altruism, mixed with the rationality culture from the Bay Area, gave rise to a strong distortion of how to clearly think about certain ideas.”",
  "source": "Twitter for iPhone",
  "retweetCount": 4,
  "replyCount": 0,
  "likeCount": 32,
  "quoteCount": 1,
  "viewCount": 4047,
  "createdAt": "Sun Dec 14 03:02:33 +0000 2025",
  "lang": "en",
  "bookmarkCount": 26,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2000038674835128718",
  "displayTextRange": [
    0,
    281
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "hardmaru",
    "url": "https://x.com/hardmaru",
    "twitterUrl": "https://twitter.com/hardmaru",
    "id": "2895499182",
    "name": "hardmaru",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1678402467078234113/XN5Oy2UP_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/2895499182/1429351923",
    "description": "",
    "location": "Minato-ku, Tokyo",
    "followers": 372185,
    "following": 1786,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Mon Nov 10 11:05:07 +0000 2014",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 144399,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 4444,
    "statusesCount": 25702,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1990204623471395284"
    ],
    "profile_bio": {
      "description": "Building @SakanaAILabs 🧠",
      "entities": {
        "description": {
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                9,
                22
              ],
              "name": "",
              "screen_name": "SakanaAILabs"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "sakana.ai",
              "expanded_url": "https://sakana.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/cVQF43wg32"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/uerSWWwqg0",
        "expanded_url": "https://twitter.com/hardmaru/status/2000038674835128718/photo/1",
        "ext_alt_text": "Why Scaling Is Not Enough\n\nI believe in scaling laws and I believe scaling will improve performance, and models like Gemini are clearly good models. The problem with scaling is this: for linear improvements, we previously had exponential growth as GPUs which canceled out the exponential resource requirements of scaling. This is no longer true. In other words, previously we invested roughly linear costs to get linear payoff, but now it has turned to exponential costs.\n\nFrontier AI Versus Economic Diffusion\n\nThe US and China follow two different approaches to AI. The US follows the idea that there will be one winner who takes it all – the one that builds superintelligence wins. Even coming short of superintelligence of AGI, if you have the best model, almost all people will use your model and not the competition’s model. The idea is: develop the biggest, badest model and people will come.\n\nChina’s philosophy is different. They believe model capabilities do not matter as much...",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "all": {
            "tags": [
              {
                "name": "Tim Dettmers",
                "screen_name": "Tim_Dettmers",
                "type": "user",
                "user_id": "872274950"
              }
            ]
          },
          "large": {},
          "orig": {}
        },
        "id_str": "2000038520145301504",
        "indices": [
          282,
          305
        ],
        "media_key": "3_2000038520145301504",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvBkG/6W9AACgACG8GQk/6XQY4AAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG8GQb/pb0AAKAAIbwZCT/pdBjgAA",
            "media_key": "3_2000038520145301504"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G8GQb_pb0AAH593.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 1007,
              "w": 1799,
              "x": 0,
              "y": 0
            },
            {
              "h": 1300,
              "w": 1300,
              "x": 0,
              "y": 0
            },
            {
              "h": 1300,
              "w": 1140,
              "x": 0,
              "y": 0
            },
            {
              "h": 1300,
              "w": 650,
              "x": 0,
              "y": 0
            },
            {
              "h": 1300,
              "w": 1799,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1300,
          "width": 1799
        },
        "sizes": {
          "large": {
            "h": 1300,
            "w": 1799
          }
        },
        "type": "photo",
        "url": "https://t.co/uerSWWwqg0"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "timdettmers.com/2025/12/10/why…",
        "expanded_url": "https://timdettmers.com/2025/12/10/why-agi-will-not-happen/",
        "indices": [
          41,
          64
        ],
        "url": "https://t.co/HBXAn0AJkp"
      }
    ],
    "user_mentions": [
      {
        "id_str": "872274950",
        "indices": [
          26,
          39
        ],
        "name": "Tim Dettmers",
        "screen_name": "Tim_Dettmers"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}
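
As a usage sketch, the fields above are enough to reconstruct the readable tweet: the t.co links can be expanded via entities.urls, and the image alt text recovered from extendedEntities.media. The snippet below assumes the response was saved as tweet.json (a hypothetical filename; all keys match the payload above):

import json

# Assumption: the raw API response above was saved to this file.
with open("tweet.json") as f:
    tweet = json.load(f)

# Replace shortened t.co links with their expanded URLs.
text = tweet["text"]
for u in tweet["entities"]["urls"]:
    text = text.replace(u["url"], u["expanded_url"])
print(text)

# Print alt text for any attached photos, when the API provides it.
for m in tweet.get("extendedEntities", {}).get("media", []):
    alt = m.get("ext_alt_text")
    if alt:
        print(f"[{m['type']} alt] {alt}")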