🐦 Twitter Post Details

@karpathy

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

https://t.co/YCvOwwjOzF
Part code, part sci-fi, and a pinch of psychosis :)
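The loop the post describes — run a fixed-length training, keep a change only if it lowers validation loss, and record each improvement as a commit on a feature branch — can be sketched roughly as follows. This is a hypothetical illustration, not code from the autoresearch repo: `train_once` is a stand-in stub for a real 5-minute GPU run, and the `commits` list stands in for actual `git commit` calls.

```python
import random

def train_once(lr: float, seed: int = 0) -> float:
    """Stand-in for one fixed-length training run; returns final validation
    loss. Hypothetical stub -- the real script trains an LLM for ~5 minutes
    on a single GPU."""
    random.seed(seed)
    # Pretend loss is minimized near lr = 3e-4, plus a little noise.
    return (lr - 3e-4) ** 2 * 1e6 + 2.0 + random.uniform(0.0, 0.01)

def autonomous_loop(candidate_lrs, budget):
    """Greedy improvement loop: try candidate settings one run at a time,
    keeping (and 'committing') only those that lower validation loss."""
    best_loss = float("inf")
    commits = []  # stand-in for git commits on a feature branch
    for step, lr in enumerate(candidate_lrs[:budget]):
        loss = train_once(lr, seed=step)
        if loss < best_loss:  # keep only strict improvements
            best_loss = loss
            commits.append((step, lr, loss))
    return best_loss, commits

best, history = autonomous_loop([1e-3, 5e-4, 3e-4, 1e-4], budget=4)
```

In the real setup the agent edits the training script itself (architecture, optimizer, hyperparameters), not just a learning rate, and the plot in the image is one dot per `train_once` call; the greedy accept-if-better structure is the same.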

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2030371219518931079/media_0.jpg",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2030371219518931079/media_0.jpg",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-03-08T14:19:26.905863",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2030371219518931079",
  "url": "https://x.com/karpathy/status/2030371219518931079",
  "twitterUrl": "https://twitter.com/karpathy/status/2030371219518931079",
  "text": "I packaged up the \"autoresearch\" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then:\n\n- the human iterates on the prompt (.md)\n- the AI agent iterates on the training code (.py)\n\nThe goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.\n\nhttps://t.co/YCvOwwjOzF\nPart code, part sci-fi, and a pinch of psychosis :)",
  "source": "Twitter for iPhone",
  "retweetCount": 1726,
  "replyCount": 543,
  "likeCount": 14023,
  "quoteCount": 386,
  "viewCount": 2700996,
  "createdAt": "Sat Mar 07 19:53:15 +0000 2026",
  "lang": "en",
  "bookmarkCount": 17153,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2030371219518931079",
  "displayTextRange": [
    0,
    274
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "karpathy",
    "url": "https://x.com/karpathy",
    "twitterUrl": "https://twitter.com/karpathy",
    "id": "33836629",
    "name": "Andrej Karpathy",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1296667294148382721/9Pr6XrPB_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/33836629/1407117611",
    "description": "",
    "location": "Stanford",
    "followers": 1897505,
    "following": 1057,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Tue Apr 21 06:49:15 +0000 2009",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 22277,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 856,
    "statusesCount": 9999,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1617979122625712128"
    ],
    "profile_bio": {
      "description": "I like to train large deep neural nets. Previously Director of AI @ Tesla, founding team @ OpenAI, PhD @ Stanford.",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": []
        },
        "url": {
          "urls": [
            {
              "display_url": "karpathy.ai",
              "expanded_url": "https://karpathy.ai",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/0EcFthjJXM"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/3tyOq2P9c6",
        "expanded_url": "https://twitter.com/karpathy/status/2030371219518931079/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "id_str": "2030361240787423232",
        "indices": [
          275,
          298
        ],
        "media_key": "3_2030361240787423232",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARwtSsqK2xAACgACHC1T3eWaQIcAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABHC1KyorbEAAKAAIcLVPd5ZpAhwAA",
            "media_key": "3_2030361240787423232"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/HC1KyorbEAAoGWr.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 961,
              "w": 1716,
              "x": 0,
              "y": 337
            },
            {
              "h": 1298,
              "w": 1298,
              "x": 418,
              "y": 0
            },
            {
              "h": 1298,
              "w": 1139,
              "x": 577,
              "y": 0
            },
            {
              "h": 1298,
              "w": 649,
              "x": 1067,
              "y": 0
            },
            {
              "h": 1298,
              "w": 1716,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1298,
          "width": 1716
        },
        "sizes": {
          "large": {
            "h": 1298,
            "w": 1716
          }
        },
        "type": "photo",
        "url": "https://t.co/3tyOq2P9c6"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [
      {
        "display_url": "github.com/karpathy/autor…",
        "expanded_url": "https://github.com/karpathy/autoresearch",
        "indices": [
          907,
          930
        ],
        "url": "https://t.co/YCvOwwjOzF"
      }
    ],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}
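The raw API response above is plain JSON, so its engagement fields can be pulled out with the standard library. A minimal sketch, using an abridged copy of the payload above (field names match the response; the subset is truncated for brevity):

```python
import json

# Abridged copy of the raw API response above.
raw = '''{
  "id": "2030371219518931079",
  "likeCount": 14023,
  "retweetCount": 1726,
  "replyCount": 543,
  "quoteCount": 386,
  "bookmarkCount": 17153,
  "viewCount": 2700996,
  "author": {"userName": "karpathy", "followers": 1897505}
}'''

post = json.loads(raw)
interactions = (post["likeCount"] + post["retweetCount"]
                + post["replyCount"] + post["quoteCount"])
engagement_rate = interactions / post["viewCount"]
print(f"@{post['author']['userName']}: {engagement_rate:.2%} engagement")
```

In a pipeline you would load the full response from file or HTTP instead of an inline string; the field names are exactly those in the payload above.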