🐦 Twitter Post Details

Viewing enriched Twitter post

@omarsar0

This new paper is wild!

It suggests that LLM-based agents operate according to macroscopic physical laws, similar to how particles behave in thermodynamic systems.

And it looks like it's a discovery that applies across models.

LLM agents work really well across different domains, but we don't have a theory for why.

The behavior of these systems is often viewed as a direct product of complex internal engineering: prompt templates, memory modules, and sophisticated tool calling. The dynamics remain a black box.

This new research suggests that LLM-driven agents exhibit detailed balance, a fundamental property of equilibrium systems in physics.

What does this mean?

It suggests that LLMs don't just learn rule sets and strategies; they might be implicitly learning an underlying potential function that evaluates states globally, capturing something like "how far the LLM perceives a state to be from the goal." This enables directed convergence without getting stuck in repetitive cycles.

The researchers embedded LLMs within agent frameworks and measured transition probabilities between states. Using a least-action principle from physics, they estimated the potential function governing these transitions.

The results across GPT-5 Nano, Claude-4, and Gemini-2.5-flash: state transitions largely satisfy the detailed balance condition, indicating that their generative dynamics behave like equilibrium systems.

In a symbolic fitting task with 50,228 state transitions across 7,484 distinct states, 69.56% of high-probability transitions moved toward lower potential. The potential function captured expression-level features like complexity and syntactic validity without needing string-level information.

Different models sat at different points on the exploration-exploitation spectrum. Claude-4 and Gemini-2.5-flash converged rapidly to a few states. GPT-5 Nano explored widely, producing 645 distinct valid outputs over 20,000 generations.
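Detailed balance means that, under a stationary distribution π with π_i ∝ e^(−V_i), forward and reverse flows between any two states cancel: π_i·P(i→j) = π_j·P(j→i). A consequence is that potential gaps can be read off from transition statistics, V_j − V_i = −ln(P(i→j)/P(j→i)). The sketch below is a rough illustration of that idea, not the paper's actual least-action estimation procedure; both function names are hypothetical:

```python
import math
from collections import Counter

def estimate_potential_gaps(transitions):
    """Estimate pairwise potential gaps V_j - V_i = -ln(N_ij / N_ji)
    from a list of observed (state_i, state_j) transitions.
    Only defined for pairs observed in both directions."""
    counts = Counter(transitions)
    gaps = {}
    for (i, j), n_ij in counts.items():
        n_ji = counts.get((j, i), 0)
        # Store each unordered pair once, keyed by first-seen direction.
        if i != j and n_ji > 0 and (j, i) not in gaps:
            gaps[(i, j)] = -math.log(n_ij / n_ji)
    return gaps

def detailed_balance_residual(gaps, cycle):
    """Kolmogorov cycle criterion: if detailed balance holds, the
    potential gaps summed around any closed cycle should be ~0."""
    total = 0.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        if (a, b) in gaps:
            total += gaps[(a, b)]
        elif (b, a) in gaps:
            total -= gaps[(b, a)]
        else:
            raise KeyError(f"no observed gap for edge {(a, b)}")
    return total
```

With transition counts whose forward/backward ratios follow e^(−ΔV), the recovered gaps are consistent and the cycle residual stays near zero; a large residual on many cycles would indicate non-equilibrium dynamics instead.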
This might be the first discovery of a macroscopic physical law in LLM generative dynamics that doesn't depend on specific model details. It suggests we can study AI agents as physical systems with measurable, predictable properties rather than just engineering artifacts.

Paper: https://t.co/UO1pMWxctY

Learn to build effective AI Agents in our academy: https://t.co/JBU5beIoD0


📊 Media Metadata

{
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000626975296405525/media_0.png?",
      "filename": "media_0.png"
    },
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000626975296405525/media_1.png?",
      "filename": "media_1.png"
    }
  ],
  "processed_at": "2025-12-15T19:04:15.896313",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2000626975296405525",
  "url": "https://x.com/omarsar0/status/2000626975296405525",
  "twitterUrl": "https://twitter.com/omarsar0/status/2000626975296405525",
  "text": "This new paper is wild!\n\nIt suggests that LLM-based agents operate according to macroscopic physical laws, similar to how particles behave in thermodynamic systems.\n\nAnd it looks like it's a discovery that applies across models.\n\nLLM agents work really well on different domains, but we don't have a theory for why.\n\nThe behavior of these systems is often viewed as a direct product of complex internal engineering: prompt templates, memory modules, and sophisticated tool calling. The dynamics remain a black box.\n\nThis new research suggests that LLM-driven agents exhibit detailed balance, a fundamental property of equilibrium systems in physics.\n\nWhat does this mean?\n\nIt suggests that LLMs don't just learn rule sets and strategies; they might be implicitly learning an underlying potential function that evaluates states globally, capturing something like \"how far the LLM perceives a state to be from the goal.\" This enables directed convergence without getting stuck in repetitive cycles.\n\nThe researchers embedded LLMs within agent frameworks and measured transition probabilities between states. Using a least action principle from physics, they estimated the potential function governing these transitions.\n\nThe results across GPT-5 Nano, Claude-4, and Gemini-2.5-flash: state transitions largely satisfy the detailed balance condition. This indicates that their generative dynamics exhibit characteristics similar to equilibrium systems.\n\nIn a symbolic fitting task with 50,228 state transitions across 7,484 different states, 69.56% of high-probability transitions moved toward lower potential. The potential function captured expression-level features like complexity and syntactic validity without needing string-level information.\n\nDifferent models showed different behaviors on the exploration-exploitation spectrum. Claude-4 and Gemini-2.5-flash converged rapidly to a few states. 
GPT-5 Nano explored widely, producing 645 different valid outputs in 20,000 generations.\n\nThis might be the first discovery of a macroscopic physical law in LLM generative dynamics that doesn't depend on specific model details. It suggests we can study AI agents as physical systems with measurable, predictable properties rather than just engineering artifacts.\n\nPaper: https://t.co/UO1pMWxctY\n\nLearn to build effective AI Agents in our academy: https://t.co/JBU5beIoD0",
  "source": "Twitter for iPhone",
  "retweetCount": 0,
  "replyCount": 4,
  "likeCount": 8,
  "quoteCount": 0,
  "viewCount": 1494,
  "createdAt": "Mon Dec 15 18:00:15 +0000 2025",
  "lang": "en",
  "bookmarkCount": 15,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2000626975296405525",
  "displayTextRange": [
    0,
    280
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "omarsar0",
    "url": "https://x.com/omarsar0",
    "twitterUrl": "https://twitter.com/omarsar0",
    "id": "3448284313",
    "name": "elvis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "description": "",
    "location": "DAIR.AI Academy",
    "followers": 279335,
    "following": 735,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 33941,
    "hasCustomTimelines": true,
    "isTranslator": true,
    "mediaCount": 4382,
    "statusesCount": 16746,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2000385348413850055"
    ],
    "profile_bio": {
      "description": "Building @dair_ai • Prev: Meta AI, Elastic, PhD • New cohort: https://t.co/GZMhf39NRs",
      "entities": {
        "description": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com/courses/claude…",
              "expanded_url": "https://dair-ai.thinkific.com/courses/claude-code-for-everyone-2",
              "indices": [
                62,
                85
              ],
              "url": "https://t.co/GZMhf39NRs"
            }
          ],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                9,
                17
              ],
              "name": "",
              "screen_name": "dair_ai"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "dair.ai",
              "expanded_url": "https://www.dair.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/XQto5ypkSM"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/8uETdkApKJ",
        "expanded_url": "https://twitter.com/omarsar0/status/2000626975296405525/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "h": 73,
                "w": 73,
                "x": 207,
                "y": 1508
              },
              {
                "h": 132,
                "w": 132,
                "x": 551,
                "y": 65
              },
              {
                "h": 133,
                "w": 133,
                "x": 1002,
                "y": 61
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "h": 73,
                "w": 73,
                "x": 207,
                "y": 1508
              },
              {
                "h": 132,
                "w": 132,
                "x": 551,
                "y": 65
              },
              {
                "h": 133,
                "w": 133,
                "x": 1002,
                "y": 61
              }
            ]
          }
        },
        "id_str": "2000626971034935296",
        "indices": [
          281,
          304
        ],
        "media_key": "3_2000626971034935296",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvDp6FgmmAACgACG8Onol6bQBUAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG8OnoWCaYAAKAAIbw6eiXptAFQAA",
            "media_key": "3_2000626971034935296"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G8OnoWCaYAAY_NK.png",
        "original_info": {
          "focus_rects": [
            {
              "h": 877,
              "w": 1566,
              "x": 0,
              "y": 0
            },
            {
              "h": 1566,
              "w": 1566,
              "x": 0,
              "y": 0
            },
            {
              "h": 1785,
              "w": 1566,
              "x": 0,
              "y": 0
            },
            {
              "h": 1804,
              "w": 902,
              "x": 664,
              "y": 0
            },
            {
              "h": 1804,
              "w": 1566,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1804,
          "width": 1566
        },
        "sizes": {
          "large": {
            "h": 1804,
            "w": 1566
          }
        },
        "type": "photo",
        "url": "https://t.co/8uETdkApKJ"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "arxiv.org/abs/2512.10047",
        "expanded_url": "https://arxiv.org/abs/2512.10047",
        "indices": [
          2270,
          2293
        ],
        "url": "https://t.co/UO1pMWxctY"
      },
      {
        "display_url": "dair-ai.thinkific.com",
        "expanded_url": "https://dair-ai.thinkific.com/",
        "indices": [
          2346,
          2369
        ],
        "url": "https://t.co/JBU5beIoD0"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}