🐦 Twitter Post Details

Viewing enriched Twitter post

@awnihannun

The obvious reasons intelligence-per-watt is going up so fast: more efficient architectures, more efficient hardware, and higher quality data.

The less obvious reason: finding the right balance on what should be stored in the model's weights and what can be computed through tool use, reasoning, and potentially other types of in-context learning.

A simple example: in the earlier LLM days, it was quite likely that for simple arithmetic (e.g. adding two numbers), the model had to basically memorize tuples of (inputs, op, outputs). You can imagine this took up a lot of room in the weights.

With reasoning the model can compute this in its chain-of-thought. With tool calling the model can compute this with a tool call. In both cases it saves a lot of space in the weights.

I'm sure there is a floor on the smallest LLM that can have say GPT 5.x quality. But that floor could be 5B, it could be 100B. And I don't think anyone really knows because of the above effects.

In other words we can probably go much further with a 5B-15B model with exceptional tool calling and reasoning.
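The arithmetic-via-tool-call idea in the post can be sketched minimally: instead of the weights memorizing (inputs, op, output) tuples, the model only needs to emit a small structured request, and exact arithmetic lives in code. This is an illustrative sketch, not from the post; the tool registry, the JSON schema, and the `dispatch` helper are all assumptions chosen for the example.

```python
# Sketch: a tool call replacing memorized arithmetic.
# The model emits a structured request; the runtime executes it
# deterministically, so the weights never store the answer itself.
import json
import operator

# Hypothetical tool registry: exact arithmetic implemented in code.
TOOLS = {
    "add": operator.add,
    "sub": operator.sub,
    "mul": operator.mul,
}

def dispatch(tool_call_json: str):
    """Parse a model-emitted tool call and run it deterministically."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["op"]]
    return fn(call["a"], call["b"])

# A model that has learned *when* to call the tool, rather than the
# answer, only needs to emit this short JSON string:
model_output = '{"op": "add", "a": 48723, "b": 9156}'
print(dispatch(model_output))  # 57879
```

The point of the sketch is the asymmetry the post describes: the space of possible sums is enormous, but the emitted tool call is a few tokens, and the correctness burden shifts from the weights to the runtime.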

📊 Media Metadata

{
  "score": 0.42,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-03-08T14:31:14.047122",
  "import_source": "api_import",
  "source_tagged_at": "2026-03-08T14:31:14.047132",
  "enriched": true,
  "enriched_at": "2026-03-08T14:31:14.047135"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2030411584619925605",
  "url": "https://x.com/awnihannun/status/2030411584619925605",
  "twitterUrl": "https://twitter.com/awnihannun/status/2030411584619925605",
  "text": "The obvious reasons intelligence-per-watt is going up so fast: more efficient architectures, more efficient hardware, and higher quality data.\n\nThe less obvious reason: finding the right balance on what should be stored in the model's weights and what can be computed through tool use, reasoning, and potentially other types of in-context learning.\n\nA simple example: in the earlier LLM days, it was quite likely that for simple arithmetic (e.g. adding two numbers), the model had to basically memorize tuples of (inputs, op, outputs). You can imagine this took up a lot of room in the weights.\n\nWith reasoning the model can compute this in its chain-of-thought. With tool calling the model can compute this with a tool call. In both cases it saves a lot of space in the weights.\n\nI'm sure there is a floor on the smallest LLM that can have say GPT 5.x quality. But that floor could be 5B, it could be 100B. And I don't think anyone really knows because of the above effects. \n\nIn other words we can probably go much further with a 5B-15B model with exceptional tool calling and reasoning.",
  "source": "Twitter for iPhone",
  "retweetCount": 32,
  "replyCount": 30,
  "likeCount": 305,
  "quoteCount": 3,
  "viewCount": 26261,
  "createdAt": "Sat Mar 07 22:33:39 +0000 2026",
  "lang": "en",
  "bookmarkCount": 84,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2030411584619925605",
  "displayTextRange": [
    0,
    280
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "awnihannun",
    "url": "https://x.com/awnihannun",
    "twitterUrl": "https://twitter.com/awnihannun",
    "id": "245262377",
    "name": "Awni Hannun",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/829414498893123584/P6JytwO8_normal.jpg",
    "coverPicture": "",
    "description": "",
    "location": "",
    "followers": 42741,
    "following": 334,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Mon Jan 31 08:05:27 +0000 2011",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 11048,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 871,
    "statusesCount": 4925,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {
      "description": "Prev: co-created MLX at Apple, trained neural nets at FAIR, Baidu, Stanford PhD",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": []
        },
        "url": {
          "urls": [
            {
              "display_url": "awnihannun.com",
              "expanded_url": "https://awnihannun.com/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/BnD3F0oqO4"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}