🐦 Twitter Post Details

Viewing enriched Twitter post

@dair_ai

RT @omarsar0: NEW research from Sakana AI.

Long contexts get expensive as every token in the input contributes to quadratic attention costs, higher latency, and more memory.

This new research introduces Doc-to-LoRA, a lightweight hypernetwork that meta-learns to compress long documents into LoRA adapters in a SINGLE forward pass.

In other words, it can instantly internalize contexts.

Instead of re-reading the full context at every inference call, the model internalizes the document into compact adapter weights. No iterative fine-tuning is needed, and no repeated context consumption.

Cool to see all the interesting new approaches to deal with long contexts like RLM, LCM, and now Doc-to-LoRA.

The results:

Near-perfect accuracy on needle-in-a-haystack tasks at sequence lengths exceeding the target model's native context window by over 4x.

It also outperforms standard context distillation while significantly reducing peak memory consumption and update latency on real-world QA datasets.

Why it matters:

As agents and LLM applications deal with increasingly long documents, turning context into compact adapters on the fly could drastically reduce serving costs and enable rapid knowledge updates.

Paper: https://t.co/Fh1IeLrSpm

Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
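The mechanism the post describes, a hypernetwork that maps a long document to LoRA adapter weights in a single forward pass, so later inference calls never re-read the document, can be sketched in miniature. This is a toy illustration only: the shapes, the document embedding, and the single linear hypernetwork `H` are made-up assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank, d_doc = 8, 2, 16          # hidden size, LoRA rank, doc-embedding size (illustrative)

W = rng.standard_normal((d_model, d_model))        # frozen base weight of one layer

# Toy "hypernetwork": one linear map from a document embedding to the
# flattened low-rank factors (A, B) of a LoRA adapter.
H = rng.standard_normal((2 * rank * d_model, d_doc)) * 0.01

def doc_to_lora(doc_embedding):
    """Single forward pass: document embedding -> LoRA factors (A, B)."""
    flat = H @ doc_embedding
    A = flat[: rank * d_model].reshape(rank, d_model)      # (r, d)
    B = flat[rank * d_model :].reshape(d_model, rank)      # (d, r)
    return A, B

def adapted_forward(x, A, B):
    """Base layer plus the low-rank update: (W + B A) x."""
    return W @ x + B @ (A @ x)

doc = rng.standard_normal(d_doc)     # stand-in for an encoded long document
A, B = doc_to_lora(doc)              # one pass; no fine-tuning loop
x = rng.standard_normal(d_model)
y = adapted_forward(x, A, B)
```

After `doc_to_lora` runs once, every subsequent call uses only the compact `(A, B)` pair instead of the full document in the prompt, which is the serving-cost argument the post makes.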

πŸ“Š Enrichment Metadata

{
  "score": 0.38,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.08,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-03-01T12:15:22.156290",
  "import_source": "api_import",
  "source_tagged_at": "2026-03-01T12:15:22.156306",
  "enriched": true,
  "enriched_at": "2026-03-01T12:15:22.156310"
}

πŸ”§ Raw API Response

{
  "type": "tweet",
  "id": "2027386380754571314",
  "url": "https://x.com/dair_ai/status/2027386380754571314",
  "twitterUrl": "https://twitter.com/dair_ai/status/2027386380754571314",
  "text": "RT @omarsar0: NEW research from Sakana AI.\n\nLong contexts get expensive as every token in the input contributes to quadratic attention cost…",
  "source": "Twitter for iPhone",
  "retweetCount": 42,
  "replyCount": 19,
  "likeCount": 277,
  "quoteCount": 4,
  "viewCount": 19762,
  "createdAt": "Fri Feb 27 14:12:34 +0000 2026",
  "lang": "en",
  "bookmarkCount": 219,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2027386380754571314",
  "displayTextRange": [
    0,
    140
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "dair_ai",
    "url": "https://x.com/dair_ai",
    "twitterUrl": "https://twitter.com/dair_ai",
    "id": "889050642903293953",
    "name": "DAIR.AI",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1643277398522187778/31dedbLo_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/889050642903293953/1742055232",
    "description": "",
    "location": "",
    "followers": 90586,
    "following": 1,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Sun Jul 23 09:12:45 +0000 2017",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 4185,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 161,
    "statusesCount": 2963,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2028094132090966088"
    ],
    "profile_bio": {
      "description": "Democratizing AI research, education, and technologies.",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": []
        },
        "url": {
          "urls": [
            {
              "display_url": "dair.ai",
              "expanded_url": "https://www.dair.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/lkqPZtMmfU"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "timestamps": [],
    "urls": [],
    "user_mentions": [
      {
        "id_str": "3448284313",
        "indices": [
          3,
          12
        ],
        "name": "elvis",
        "screen_name": "omarsar0"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": {
    "type": "tweet",
    "id": "2027385998993420571",
    "url": "https://x.com/omarsar0/status/2027385998993420571",
    "twitterUrl": "https://twitter.com/omarsar0/status/2027385998993420571",
    "text": "NEW research from Sakana AI.\n\nLong contexts get expensive as every token in the input contributes to quadratic attention costs, higher latency, and more memory.\n\nThis new research introduces Doc-to-LoRA, a lightweight hypernetwork that meta-learns to compress long documents into LoRA adapters in a SINGLE forward pass.\n\nIn other words, it can instantly internalize contexts.\n\nInstead of re-reading the full context at every inference call, the model internalizes the document into compact adapter weights. No iterative fine-tuning is needed, and no repeated context consumption.\n\nCool to see all the interesting new approaches to deal with long contexts like RLM, LCM, and now Doc-to-LoRA.\n\nThe results:\n\nNear-perfect accuracy on needle-in-a-haystack tasks at sequence lengths exceeding the target model's native context window by over 4x.\n\nIt also outperforms standard context distillation while significantly reducing peak memory consumption and update latency on real-world QA datasets.\n\nWhy it matters:\n\nAs agents and LLM applications deal with increasingly long documents, turning context into compact adapters on the fly could drastically reduce serving costs and enable rapid knowledge updates.\n\nPaper: https://t.co/Fh1IeLrSpm\n\nLearn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX",
    "source": "Twitter for iPhone",
    "retweetCount": 42,
    "replyCount": 19,
    "likeCount": 277,
    "quoteCount": 4,
    "viewCount": 19762,
    "createdAt": "Fri Feb 27 14:11:03 +0000 2026",
    "lang": "en",
    "bookmarkCount": 219,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "2027385998993420571",
    "displayTextRange": [
      0,
      279
    ],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {
      "type": "user",
      "userName": "omarsar0",
      "url": "https://x.com/omarsar0",
      "twitterUrl": "https://twitter.com/omarsar0",
      "id": "3448284313",
      "name": "elvis",
      "isVerified": false,
      "isBlueVerified": true,
      "verifiedType": null,
      "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
      "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
      "description": "",
      "location": "DAIR.AI Academy",
      "followers": 291571,
      "following": 776,
      "status": "",
      "canDm": true,
      "canMediaTag": true,
      "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
      "entities": {
        "description": {
          "urls": []
        },
        "url": {}
      },
      "fastFollowersCount": 0,
      "favouritesCount": 34909,
      "hasCustomTimelines": true,
      "isTranslator": true,
      "mediaCount": 4525,
      "statusesCount": 17379,
      "withheldInCountries": [],
      "affiliatesHighlightedLabel": {},
      "possiblySensitive": false,
      "pinnedTweetIds": [
        "2028103978190590118"
      ],
      "profile_bio": {
        "description": "Building @dair_ai β€’ Prev: Meta AI, Elastic, PhD β€’ New AI learning portal: https://t.co/1e8RZKs4uX",
        "entities": {
          "description": {
            "hashtags": [],
            "symbols": [],
            "urls": [
              {
                "display_url": "academy.dair.ai",
                "expanded_url": "https://academy.dair.ai/",
                "indices": [
                  74,
                  97
                ],
                "url": "https://t.co/1e8RZKs4uX"
              }
            ],
            "user_mentions": [
              {
                "id_str": "0",
                "indices": [
                  9,
                  17
                ],
                "name": "",
                "screen_name": "dair_ai"
              }
            ]
          },
          "url": {
            "urls": [
              {
                "display_url": "dair.ai",
                "expanded_url": "https://www.dair.ai/",
                "indices": [
                  0,
                  23
                ],
                "url": "https://t.co/XQto5ypSIk"
              }
            ]
          }
        }
      },
      "isAutomated": false,
      "automatedBy": null
    },
    "extendedEntities": {
      "media": [
        {
          "display_url": "pic.twitter.com/ymP0DPl3Gq",
          "expanded_url": "https://twitter.com/omarsar0/status/2027385998993420571/photo/1",
          "ext_media_availability": {
            "status": "Available"
          },
          "features": {
            "large": {
              "faces": [
                {
                  "h": 94,
                  "w": 94,
                  "x": 779,
                  "y": 1231
                }
              ]
            },
            "orig": {
              "faces": [
                {
                  "h": 94,
                  "w": 94,
                  "x": 779,
                  "y": 1231
                }
              ]
            }
          },
          "id_str": "2027385995415736320",
          "indices": [
            280,
            303
          ],
          "media_key": "3_2027385995415736320",
          "media_results": {
            "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARwiuNI/W1AACgACHCK40xSacRsAAA==",
            "result": {
              "__typename": "ApiMedia",
              "id": "QXBpTWVkaWE6DAABCgABHCK40j9bUAAKAAIcIrjTFJpxGwAA",
              "media_key": "3_2027385995415736320"
            }
          },
          "media_url_https": "https://pbs.twimg.com/media/HCK40j9bUAAwMRe.jpg",
          "original_info": {
            "focus_rects": [
              {
                "h": 894,
                "w": 1596,
                "x": 0,
                "y": 0
              },
              {
                "h": 1596,
                "w": 1596,
                "x": 0,
                "y": 0
              },
              {
                "h": 1754,
                "w": 1539,
                "x": 0,
                "y": 0
              },
              {
                "h": 1754,
                "w": 877,
                "x": 44,
                "y": 0
              },
              {
                "h": 1754,
                "w": 1596,
                "x": 0,
                "y": 0
              }
            ],
            "height": 1754,
            "width": 1596
          },
          "sizes": {
            "large": {
              "h": 1754,
              "w": 1596
            }
          },
          "type": "photo",
          "url": "https://t.co/ymP0DPl3Gq"
        }
      ]
    },
    "card": null,
    "place": {},
    "entities": {
      "hashtags": [],
      "symbols": [],
      "urls": [
        {
          "display_url": "arxiv.org/abs/2602.15902",
          "expanded_url": "https://arxiv.org/abs/2602.15902",
          "indices": [
            1211,
            1234
          ],
          "url": "https://t.co/Fh1IeLrSpm"
        },
        {
          "display_url": "academy.dair.ai",
          "expanded_url": "https://academy.dair.ai/",
          "indices": [
            1287,
            1310
          ],
          "url": "https://t.co/1e8RZKs4uX"
        }
      ],
      "user_mentions": []
    },
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "isLimitedReply": false,
    "article": null
  },
  "isLimitedReply": false,
  "article": null
}