🐦 Twitter Post Details

Viewing enriched Twitter post

@omarsar0

Recurrent Memory Finds What LLMs Miss

Explores the capability of transformer-based models in extremely long context processing.

Finds that both GPT-4 and RAG performance heavily rely on the first 25% of the input, which means there is room for improved context processing mechanisms.

The paper reports that recurrent memory augmentation of transformer models achieves superior performance on documents of up to 10 million tokens.

The recurrent memory seems to enable effective multi-hop reasoning, which is challenging for current LLMs and RAG systems. It also has the desirable effect of filtering out irrelevant information, which is key in long context processing.

With all the recent releases of long context models, this is an interesting and timely paper. I like the idea of combining both recurrent memory and retrieval with these large models to make them more generalizable for complex tasks that require long context processing.
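The recurrent-memory idea the post describes can be illustrated with a minimal PyTorch sketch: a long input is processed in fixed-size segments, with a small block of memory vectors prepended to each segment and carried forward to the next. This is a generic segment-level recurrence for illustration only, not the paper's exact architecture; the class name, layer sizes, and memory size are assumptions.

import torch
import torch.nn as nn

class RecurrentMemorySketch(nn.Module):
    # Illustrative sketch (not the paper's architecture): split a long input
    # into segments and carry a small set of memory vectors across segments.
    def __init__(self, d_model=256, n_heads=8, n_memory=16, seg_len=512):
        super().__init__()
        self.seg_len = seg_len
        self.n_memory = n_memory
        self.memory_init = nn.Parameter(torch.randn(n_memory, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        # x: (batch, total_len, d_model) -- an already-embedded long input
        batch = x.size(0)
        memory = self.memory_init.unsqueeze(0).expand(batch, -1, -1)
        segment_outputs = []
        for start in range(0, x.size(1), self.seg_len):
            segment = x[:, start:start + self.seg_len]
            hidden = self.encoder(torch.cat([memory, segment], dim=1))
            memory = hidden[:, :self.n_memory]            # carry memory to next segment
            segment_outputs.append(hidden[:, self.n_memory:])
        return torch.cat(segment_outputs, dim=1), memory

# Example: a 4096-token embedded input processed 512 tokens at a time.
model = RecurrentMemorySketch()
out, final_memory = model(torch.randn(2, 4096, 256))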

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1759591371126571028/media_0.jpg",
      "type": "photo",
      "original_url": "https://pbs.twimg.com/media/GGtQ4AKWgAA-tQq.jpg",
      "download_date": "2025-08-13T05:51:34.684401",
      "stored_in_supabase": true,
      "format_converted_from_list": true
    }
  ],
  "conversion_date": "2025-08-13T00:32:25.913690",
  "format_converted": true,
  "original_structure": "had_media_only"
}
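A small sketch of how this metadata block might be consumed: prefer the Supabase-hosted copy when "stored_in_supabase" is true and fall back to "original_url" otherwise. The function name and the fallback rule are assumptions; the field names come from the metadata shown above.

import json

def resolve_media_urls(media_metadata: dict) -> list[str]:
    # Pick one fetchable URL per media item from an enriched metadata block.
    urls = []
    for item in media_metadata.get("media", []):
        if item.get("stored_in_supabase"):
            urls.append(item["url"])
        else:
            urls.append(item.get("original_url", item["url"]))
    return urls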

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with LLMs, RAG, and AI Agents @dair_ai • Prev: Meta AI, Galactica LLM, PapersWithCode, PhD • Creator of the Prompting Guide (~3M learners)",
    "fast_followers_count": 0,
    "favourites_count": 24671,
    "followers_count": 183512,
    "friends_count": 465,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3278,
    "location": "",
    "media_count": 1948,
    "name": "elvis",
    "normal_followers_count": 183512,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 10319,
    "translator_type": "regular",
    "url": "https://t.co/H9w2yq9w1L",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1759591371126571028",
  "conversation_id": "1759591371126571028",
  "full_text": "Recurrent Memory Finds What LLMs Miss\n\nExplores the capability of transformer-based models in extremely long context processing. \n\nFinds that both GPT-4 and RAG performance heavily rely on the first 25% of the input, which means there is room for improved context processing mechanisms.\n\nThe paper reports that recurrent memory augmentation of transformer models achieves superior performance on documents of up to 10 million tokens. \n\nThe recurrent memory seems to enable effective multi-hop reasoning which is challenging for current LLMs and RAG systems. It also has the desirable effect of filtering out irrelevant information which is key in long context processing.\n\nWith all the recent releases of long context models, this is an interesting and timely paper. I like the idea of combining both the recurrent memory and retrieval to these large models to make them more generalizable for complex tasks that require long context processing.",
  "reply_count": 3,
  "retweet_count": 80,
  "favorite_count": 387,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GGtQ4AKWgAA-tQq.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1759591371126571028",
  "created_at": "2024-02-19T14:50:49.000Z",
  "#sort_index": "1759591371126571028",
  "view_count": 37730,
  "quote_count": 2,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/omarsar0/status/1759591371126571028"
}
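For reference, a hedged sketch of extracting the main fields from a raw response like the one above. The file name is a placeholder assumption; the field names match those shown in the response.

import json

def summarize_post(path: str) -> dict:
    # Load an enriched post record and pull out the commonly used fields.
    with open(path, encoding="utf-8") as f:
        post = json.load(f)
    return {
        "author": post["user"]["screen_name"],
        "posted_at": post["created_at"],
        "text": post["full_text"],
        "replies": post["reply_count"],
        "retweets": post["retweet_count"],
        "likes": post["favorite_count"],
        "views": post.get("view_count"),
        "media_urls": [m["media_url"] for m in post.get("media", [])],
        "url": post["url"],
    }

if __name__ == "__main__":
    # "post_1759591371126571028.json" is a hypothetical local copy of the response above.
    print(json.dumps(summarize_post("post_1759591371126571028.json"), indent=2))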