🐦 Twitter Post Details

@hardmaru

Instead of forcing models to hold everything in an active context window, we can use hypernetworks to instantly compile documents and tasks directly into the model's weights. A step towards giving language models durable memory and fast adaptation. Blog: https://t.co/iHoifpsLMu
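The post describes a hypernetwork that maps a document or task into a LoRA-style low-rank weight update in a single forward pass, rather than via per-task fine-tuning. A minimal NumPy sketch of that idea, with toy sizes and a random (untrained) hypernetwork standing in for the real meta-learned one — all names and dimensions here are illustrative assumptions, not Sakana AI's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, RANK, EMB = 64, 4, 32  # toy sizes for illustration

# Frozen base weight of one linear layer in the LLM.
W_base = rng.standard_normal((D_MODEL, D_MODEL)) * 0.02

# "Hypernetwork": here just one linear map from a document/task
# embedding to the flattened LoRA factors A (r x d) and B (d x r).
H = rng.standard_normal((EMB, 2 * RANK * D_MODEL)) * 0.02

def compile_to_lora(doc_embedding: np.ndarray):
    """One forward pass: embedding -> LoRA factors, no optimization loop."""
    flat = doc_embedding @ H
    A = flat[: RANK * D_MODEL].reshape(RANK, D_MODEL)
    B = flat[RANK * D_MODEL :].reshape(D_MODEL, RANK)
    return A, B

doc_emb = rng.standard_normal(EMB)  # stand-in for an encoded document
A, B = compile_to_lora(doc_emb)

# The adapted layer is a rank-RANK update of the frozen base weight.
W_adapted = W_base + B @ A
```

The point of the sketch is the cost profile: "compiling" a new document into the weights is one matrix multiply, not a training run.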

📊 Media Metadata

{
  "score": 0.42,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12000000000000002,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-03-01T12:19:35.525684",
  "import_source": "api_import",
  "source_tagged_at": "2026-03-01T12:19:35.525699",
  "enriched": true,
  "enriched_at": "2026-03-01T12:19:35.525702"
}
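The component values above sum to the top-level `score` (0.09 + 0.0 + 0.12 + 0.135 + 0.05 + 0.025 = 0.42), which suggests the score is a simple additive composite — that aggregation rule is an assumption inferred from the numbers, not documented behavior:

```python
# Assumed scoring rule: top-level score = sum of its components.
components = {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.12,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025,
}
# Round to absorb float accumulation error (cf. the raw 0.120000...02).
score = round(sum(components.values()), 2)
```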

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2027240562898976770",
  "url": "https://x.com/hardmaru/status/2027240562898976770",
  "twitterUrl": "https://twitter.com/hardmaru/status/2027240562898976770",
  "text": "Instead of forcing models to hold everything in an active context window, we can use hypernetworks to instantly compile documents and tasks directly into the model's weights. A step towards giving language models durable memory and fast adaptation.\n\nBlog: https://t.co/iHoifpsLMu",
  "source": "Twitter for iPhone",
  "retweetCount": 229,
  "replyCount": 61,
  "likeCount": 2505,
  "quoteCount": 36,
  "viewCount": 290558,
  "createdAt": "Fri Feb 27 04:33:09 +0000 2026",
  "lang": "en",
  "bookmarkCount": 1994,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2027240562898976770",
  "displayTextRange": [
    0,
    279
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "hardmaru",
    "url": "https://x.com/hardmaru",
    "twitterUrl": "https://twitter.com/hardmaru",
    "id": "2895499182",
    "name": "hardmaru",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1678402467078234113/XN5Oy2UP_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/2895499182/1429351923",
    "description": "",
    "location": "Minato-ku, Tokyo",
    "followers": 387697,
    "following": 1813,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Mon Nov 10 11:05:07 +0000 2014",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 143443,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 4454,
    "statusesCount": 25772,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1990204623471395284"
    ],
    "profile_bio": {
      "description": "Co-Founder and CEO @SakanaAILabs 🎏",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                19,
                32
              ],
              "name": "",
              "screen_name": "SakanaAILabs"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "sakana.ai",
              "expanded_url": "https://sakana.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/cVQF43vIdu"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": {
    "binding_values": [
      {
        "key": "vanity_url",
        "value": {
          "scribe_key": "vanity_url",
          "string_value": "pub.sakana.ai"
        }
      },
      {
        "key": "domain",
        "value": {
          "string_value": "pub.sakana.ai"
        }
      },
      {
        "key": "title",
        "value": {
          "string_value": "Instant LLM Updates with Doc-to-LoRA and Text-to-LoRA"
        }
      },
      {
        "key": "card_url",
        "value": {
          "scribe_key": "card_url",
          "string_value": "https://t.co/e2b8G9LJKe"
        }
      }
    ],
    "card_platform": {
      "platform": {
        "audience": {
          "name": "production"
        },
        "device": {
          "name": "iPhone",
          "version": "13"
        }
      }
    },
    "name": "summary",
    "url": "https://t.co/e2b8G9LJKe",
    "user_refs_results": []
  },
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [
      {
        "display_url": "pub.sakana.ai/doc-to-lora/",
        "expanded_url": "https://pub.sakana.ai/doc-to-lora/",
        "indices": [
          256,
          279
        ],
        "url": "https://t.co/iHoifpsLMu"
      }
    ],
    "user_mentions": []
  },
  "quoted_tweet": {
    "type": "tweet",
    "id": "2027240298666209535",
    "url": "https://x.com/SakanaAILabs/status/2027240298666209535",
    "twitterUrl": "https://twitter.com/SakanaAILabs/status/2027240298666209535",
    "text": "We’re excited to introduce Doc-to-LoRA and Text-to-LoRA, two related research exploring how to make LLM customization faster and more accessible.\n\nhttps://t.co/ApVzVsBuv1\n\nBy training a Hypernetwork to generate LoRA adapters on the fly, these methods allow models to instantly internalize new information or adapt to new tasks.\n\nBiological systems naturally rely on two key cognitive abilities: durable long-term memory to store facts, and rapid adaptation to handle new tasks given limited sensory cues. While modern LLMs are highly capable, they still lack this flexibility. Traditionally, adding long-term memory or adapting an LLM to a specific downstream task requires an expensive and time-consuming model update, such as fine-tuning or context distillation, or relies on memory-intensive long prompts.\n\nTo bypass these limitations, our work focuses on the concept of cost amortization. We pay the meta-training cost once to train a hypernetwork capable of producing tasks or document specific LoRAs on demand. This turns what used to be a heavy engineering pipeline into a single, inexpensive forward pass. Instead of performing per-task optimization, the hypernetwork meta-learns update rules to instantly modify an LLM given a new task description or a long document.\n\nIn our experiments, Text-to-LoRA successfully specializes models to unseen tasks using just a natural language description. Building on this, Doc-to-LoRA is able to internalize factual documents. On a needle-in-a-haystack task, Doc-to-LoRA achieves near-perfect accuracy on instances five times longer than the base model's context window. It can even generalize to transfer visual information from a vision-language model into a text-only LLM, allowing it to classify images purely through internalized weights.\n\nImportantly, both methods run with sub-second latency, enabling rapid experimentation while avoiding the overhead of traditional model updates. 
This approach is a step towards lowering the technical barriers of model customization, allowing end-users to specialize foundation models via simple text inputs. We have released our code and papers for the community to explore.\n\nDoc-to-LoRA\nPaper: https://t.co/87xEEpf0GN\nCode: https://t.co/zBfQi2L9LW\n\nText-to-LoRA\nPaper: https://t.co/emLRZ4Vdvo\nCode: https://t.co/b9mrdoWWRB",
    "source": "Twitter for iPhone",
    "retweetCount": 346,
    "replyCount": 70,
    "likeCount": 2103,
    "quoteCount": 102,
    "viewCount": 550844,
    "createdAt": "Fri Feb 27 04:32:06 +0000 2026",
    "lang": "en",
    "bookmarkCount": 2020,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "2027240298666209535",
    "displayTextRange": [
      0,
      276
    ],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {
      "type": "user",
      "userName": "SakanaAILabs",
      "url": "https://x.com/SakanaAILabs",
      "twitterUrl": "https://twitter.com/SakanaAILabs",
      "id": "218811492",
      "name": "Sakana AI",
      "isVerified": false,
      "isBlueVerified": true,
      "verifiedType": "Business",
      "profilePicture": "https://pbs.twimg.com/profile_images/1885939209388929024/dtnrOdGp_normal.jpg",
      "coverPicture": "https://pbs.twimg.com/profile_banners/218811492/1686643464",
      "description": "",
      "location": "Tokyo, Japan",
      "followers": 62135,
      "following": 0,
      "status": "",
      "canDm": false,
      "canMediaTag": true,
      "createdAt": "Tue Nov 23 10:20:07 +0000 2010",
      "entities": {
        "description": {
          "urls": []
        },
        "url": {}
      },
      "fastFollowersCount": 0,
      "favouritesCount": 1,
      "hasCustomTimelines": true,
      "isTranslator": false,
      "mediaCount": 317,
      "statusesCount": 899,
      "withheldInCountries": [],
      "affiliatesHighlightedLabel": {},
      "possiblySensitive": false,
      "pinnedTweetIds": [
        "1823178623513239992"
      ],
      "profile_bio": {
        "description": "Sakana AI is an AI R&D company based in Tokyo. We want to develop AI solutions for Japan’s needs, and democratize AI in Japan. https://t.co/OWL1EpmWtn",
        "entities": {
          "description": {
            "hashtags": [],
            "symbols": [],
            "urls": [
              {
                "display_url": "sakana.ai/careers",
                "expanded_url": "http://sakana.ai/careers",
                "indices": [
                  127,
                  150
                ],
                "url": "https://t.co/OWL1EpmWtn"
              }
            ],
            "user_mentions": []
          },
          "url": {
            "urls": [
              {
                "display_url": "sakana.ai",
                "expanded_url": "https://sakana.ai/",
                "indices": [
                  0,
                  23
                ],
                "url": "https://t.co/1m2lSgnfB2"
              }
            ]
          }
        }
      },
      "isAutomated": false,
      "automatedBy": null
    },
    "extendedEntities": {
      "media": [
        {
          "display_url": "pic.twitter.com/gId3J6hgEr",
          "expanded_url": "https://twitter.com/SakanaAILabs/status/2027240298666209535/photo/1",
          "ext_media_availability": {
            "status": "Available"
          },
          "id_str": "2027240151857176577",
          "indices": [
            277,
            300
          ],
          "media_key": "16_2027240151857176577",
          "media_results": {
            "id": "QXBpTWVkaWFSZXN1bHRzOgwAAgoAARwiNC1l2yABCgACHCI0T5RbMP8AAA==",
            "result": {
              "__typename": "ApiMedia",
              "id": "QXBpTWVkaWE6DAACCgABHCI0LWXbIAEKAAIcIjRPlFsw/wAA",
              "media_key": "16_2027240151857176577"
            }
          },
          "media_url_https": "https://pbs.twimg.com/tweet_video_thumb/HCI0LWXbIAEKKAD.jpg",
          "original_info": {
            "focus_rects": [],
            "height": 480,
            "width": 1080
          },
          "sizes": {
            "large": {
              "h": 480,
              "w": 1080
            }
          },
          "type": "animated_gif",
          "url": "https://t.co/gId3J6hgEr",
          "video_info": {
            "aspect_ratio": [
              9,
              4
            ],
            "variants": [
              {
                "bitrate": 0,
                "content_type": "video/mp4",
                "url": "https://video.twimg.com/tweet_video/HCI0LWXbIAEKKAD.mp4"
              }
            ]
          }
        }
      ]
    },
    "card": null,
    "place": {},
    "entities": {
      "hashtags": [],
      "symbols": [],
      "urls": [
        {
          "display_url": "pub.sakana.ai/doc-to-lora/",
          "expanded_url": "https://pub.sakana.ai/doc-to-lora/",
          "indices": [
            147,
            170
          ],
          "url": "https://t.co/ApVzVsBuv1"
        },
        {
          "display_url": "arxiv.org/abs/2602.15902",
          "expanded_url": "https://arxiv.org/abs/2602.15902",
          "indices": [
            2186,
            2209
          ],
          "url": "https://t.co/87xEEpf0GN"
        },
        {
          "display_url": "github.com/SakanaAI/Doc-t…",
          "expanded_url": "https://github.com/SakanaAI/Doc-to-LoRA",
          "indices": [
            2216,
            2239
          ],
          "url": "https://t.co/zBfQi2L9LW"
        },
        {
          "display_url": "arxiv.org/abs/2506.06105",
          "expanded_url": "https://arxiv.org/abs/2506.06105",
          "indices": [
            2261,
            2284
          ],
          "url": "https://t.co/emLRZ4Vdvo"
        },
        {
          "display_url": "github.com/SakanaAI/Text-…",
          "expanded_url": "https://github.com/SakanaAI/Text-to-LoRA",
          "indices": [
            2291,
            2314
          ],
          "url": "https://t.co/b9mrdoWWRB"
        }
      ],
      "user_mentions": []
    },
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "isLimitedReply": false,
    "article": null
  },
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}
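In the response above, the tweet `text` carries wrapped `t.co` links while `entities.urls` maps each one to its `expanded_url`. A small sketch of resolving them when post-processing this payload — the helper name is mine, and it uses simple string replacement rather than the `indices` offsets the API also provides:

```python
def expand_urls(text: str, url_entities: list) -> str:
    """Replace each wrapped t.co link with its expanded_url."""
    for ent in url_entities:
        text = text.replace(ent["url"], ent["expanded_url"])
    return text

# Abbreviated example using the entities from the response above.
text = "A step towards giving language models durable memory. Blog: https://t.co/iHoifpsLMu"
entities = [
    {"url": "https://t.co/iHoifpsLMu",
     "expanded_url": "https://pub.sakana.ai/doc-to-lora/"},
]
resolved = expand_urls(text, entities)
```

For exact reconstruction (e.g. when multiple entities overlap), splicing by the codepoint `indices` pairs is the safer approach.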