🐦 Twitter Post Details


@omarsar0

Be careful what you put in your AGENTS.md files.

This new research evaluates AGENTS.md files for coding agents.

Everyone uses these context files in their repos to help AI coding agents. More context should mean better performance, right?

Not quite. This study tested Claude Code (Sonnet-4.5), Codex (GPT-5.2/5.1 mini), and Qwen Code across SWE-bench and a new benchmark called AGENTbench with 138 real-world instances.

LLM-generated context files actually decreased task success rates by 0.5-2% while increasing inference costs by over 20%.

Agents followed the instructions, using the mentioned tools 1.6-2.5x more often, but that instruction-following paradoxically hurt performance and required 22% more reasoning tokens.

Developer-written context files performed better, improving success by about 4%, but still came with higher costs and additional steps per task. The broader pattern is that context files encourage more exploration without helping agents locate relevant files any faster, and they largely duplicate what already exists in repo documentation.

The recommendation is clear: omit LLM-generated context files entirely, and keep developer-written ones minimal and focused on task-specific requirements rather than comprehensive overviews.

I featured a paper last week showing that LLM-generated Skills also don't work so well. Self-improving agents are exciting, but be careful of context rot and of unnecessarily overloading your context window.

Paper: https://t.co/agxvRbW26N

Learn to build effective AI agents in our academy: https://t.co/1e8RZKrwFp
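To illustrate the recommendation above, here is a hypothetical sketch of what a minimal, task-focused AGENTS.md might look like. The specific commands, paths, and rules below are invented for illustration, not taken from the paper; the point is that each line encodes a task-specific requirement the agent could not infer from the repo itself, rather than a comprehensive project overview.

```markdown
# AGENTS.md — minimal, task-focused (hypothetical example)

## Build & test
- Run `make test` before finishing any task; do not skip failing tests.
- Lint with `make lint`; CI rejects unformatted code.

## Constraints
- Never edit files under `vendor/` — they are generated.
- Any new public API requires an entry in `CHANGELOG.md`.
```

A comprehensive overview (architecture tour, directory walkthrough, coding-style essay) would mostly duplicate the repo's own documentation, which is exactly the duplication the study found adds cost without improving file localization.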

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2026306141181898887/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2026306141181898887/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-03-01T19:16:10.006823",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2026306141181898887",
  "url": "https://x.com/omarsar0/status/2026306141181898887",
  "twitterUrl": "https://twitter.com/omarsar0/status/2026306141181898887",
  "text": "Be careful what you put in your AGENTS dot md files.\n\nThis new research evaluates AGENTS dot md files for coding agents.\n\nEveryone uses these context files in their repos to help AI coding agents. More context should mean better performance, right?\n\nNot quite. This study tested Claude Code (Sonnet-4.5), Codex (GPT-5.2/5.1 mini), and Qwen Code across SWE-bench and a new benchmark called AGENTbench with 138 real-world instances.\n\nLLM-generated context files actually decreased task success rates by 0.5-2% while increasing inference costs by over 20%.\n\nAgents followed the instructions, using the mentioned tools 1.6-2.5x more often, but that instruction-following paradoxically hurt performance and required 22% more reasoning tokens.\n\nDeveloper-written context files performed better, improving success by about 4%, but still came with higher costs and additional steps per task. The broader pattern is that context files encourage more exploration without helping agents locate relevant files any faster. They largely duplicate what already exists in repo documentation.\n\nThe recommendation is clear. Omit LLM-generated context files entirely. Keep developer-written ones minimal and focused on task-specific requirements rather than comprehensive overviews.\n\nI featured a paper last week that showed that LLM-generated Skills also don't work so well. Self-improving agents are exciting, but be careful of context rot and of unnecessarily overloading your context window.\n\nPaper: https://t.co/agxvRbW26N\n\nLearn to build effective AI agents in our academy: https://t.co/1e8RZKrwFp",
  "source": "Twitter for iPhone",
  "retweetCount": 66,
  "replyCount": 56,
  "likeCount": 396,
  "quoteCount": 13,
  "viewCount": 67103,
  "createdAt": "Tue Feb 24 14:40:05 +0000 2026",
  "lang": "en",
  "bookmarkCount": 462,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2026306141181898887",
  "displayTextRange": [
    0,
    278
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "omarsar0",
    "url": "https://x.com/omarsar0",
    "twitterUrl": "https://twitter.com/omarsar0",
    "id": "3448284313",
    "name": "elvis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "description": "",
    "location": "DAIR.AI Academy",
    "followers": 291571,
    "following": 776,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 34909,
    "hasCustomTimelines": true,
    "isTranslator": true,
    "mediaCount": 4525,
    "statusesCount": 17379,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2028103978190590118"
    ],
    "profile_bio": {
      "description": "Building @dair_ai • Prev: Meta AI, Elastic, PhD • New AI learning portal: https://t.co/1e8RZKs4uX",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [
            {
              "display_url": "academy.dair.ai",
              "expanded_url": "https://academy.dair.ai/",
              "indices": [
                74,
                97
              ],
              "url": "https://t.co/1e8RZKs4uX"
            }
          ],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                9,
                17
              ],
              "name": "",
              "screen_name": "dair_ai"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "dair.ai",
              "expanded_url": "https://www.dair.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/XQto5ypSIk"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/efCt03t70r",
        "expanded_url": "https://twitter.com/omarsar0/status/2026306141181898887/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "id_str": "2026306113319178240",
        "indices": [
          279,
          302
        ],
        "media_key": "3_2026306113319178240",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARwe4qyal5AACgACHB7isxdW8IcAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABHB7irJqXkAAKAAIcHuKzF1bwhwAA",
            "media_key": "3_2026306113319178240"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/HB7irJqXkAABgyK.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 898,
              "w": 1604,
              "x": 0,
              "y": 0
            },
            {
              "h": 1604,
              "w": 1604,
              "x": 0,
              "y": 0
            },
            {
              "h": 1784,
              "w": 1565,
              "x": 0,
              "y": 0
            },
            {
              "h": 1784,
              "w": 892,
              "x": 311,
              "y": 0
            },
            {
              "h": 1784,
              "w": 1604,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1784,
          "width": 1604
        },
        "sizes": {
          "large": {
            "h": 1784,
            "w": 1604
          }
        },
        "type": "photo",
        "url": "https://t.co/efCt03t70r"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [
      {
        "display_url": "arxiv.org/abs/2602.11988",
        "expanded_url": "https://arxiv.org/abs/2602.11988",
        "indices": [
          1485,
          1508
        ],
        "url": "https://t.co/agxvRbW26N"
      },
      {
        "display_url": "academy.dair.ai",
        "expanded_url": "https://academy.dair.ai/",
        "indices": [
          1561,
          1584
        ],
        "url": "https://t.co/1e8RZKrwFp"
      }
    ],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}