🐦 Twitter Post Details


@omarsar0

There is a lot of evidence that RAG systems struggle with long-context problems. This challenge stems from both the limitations of LLMs (e.g., lost in the middle) and inefficiencies in the retriever (e.g., incomplete key information).

This work proposes LongRAG to enhance RAG's understanding of long-context knowledge, which includes both global information and factual details.

LongRAG consists of a hybrid retriever, an LLM-augmented information extractor, a CoT-guided filter, and an LLM-augmented generator. These key components enable the RAG system to mine global long-context information and effectively identify factual details.

LongRAG outperforms long-context LLMs (by 6.94%), advanced RAG (by 6.16%), and vanilla RAG (by 17.25%).

I often think that today's RAG systems are just the beginning, and there is still a lot more exploration and innovation on the horizon. This paper is another example of cleverly assembling a hybrid RAG system from specific existing components and approaches, each aimed at enhancing a different part of the pipeline and addressing its inefficiencies.
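The four-stage pipeline described in the post (hybrid retriever → LLM-augmented extractor → CoT-guided filter → LLM-augmented generator) can be sketched as plain functions. This is a minimal illustration only, not the paper's implementation: the function names are hypothetical, and the LLM-backed stages are stubbed with simple keyword heuristics to show how the stages compose.

```python
def hybrid_retrieve(query, corpus, k=2):
    # Hybrid retriever (stub): rank chunks by keyword overlap with the query,
    # standing in for a combined sparse + dense scoring scheme.
    terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda doc: len(terms & set(doc.lower().split())),
                  reverse=True)[:k]

def extract_information(query, chunks):
    # LLM-augmented extractor (stub): pull out sentences that mention
    # query terms, standing in for LLM-based factual-detail extraction.
    terms = set(query.lower().split())
    facts = []
    for chunk in chunks:
        for sent in chunk.split("."):
            if terms & set(sent.lower().split()):
                facts.append(sent.strip())
    return facts

def cot_filter(query, facts):
    # CoT-guided filter (stub): drop empty or degenerate candidates,
    # standing in for an LLM reasoning step that verifies relevance.
    return [f for f in facts if f]

def generate(query, facts):
    # LLM-augmented generator (stub): combine the filtered evidence
    # into a final answer string.
    return f"Q: {query} | Evidence: {'; '.join(facts)}"

corpus = [
    "LongRAG uses a hybrid retriever. It mines global context.",
    "Cats sleep a lot.",
]
query = "LongRAG retriever"
chunks = hybrid_retrieve(query, corpus)
facts = cot_filter(query, extract_information(query, chunks))
answer = generate(query, facts)
print(answer)
# → Q: LongRAG retriever | Evidence: LongRAG uses a hybrid retriever
```

In the actual system each stub would be an LLM call or a learned retriever; the point here is only the staged dataflow, where each stage narrows the long context toward the factual details the generator needs.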

Media 1

📊 Media Metadata

{
  "data": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/Gaq5gowaAAI02bP.png",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.549888",
  "import_source": "network_archive_import",
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1849494571946066295/media_0.png?",
      "filename": "media_0.png"
    }
  ],
  "reprocessed_at": "2025-08-12T15:25:19.871640",
  "reprocessed_reason": "missing_media_array"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27933,
    "followers_count": 216711,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3690,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216711,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1849494571946066295",
  "conversation_id": "1849494571946066295",
  "full_text": "There is a lot of evidence that RAG systems struggle with long-context problems.\n\nThis challenge stems from both the limitations of LLMs (e.g., lost in the middle) and inefficiencies due to the retriever (e.g., incomplete key information).\n\nThis work proposes LongRAG to enhance RAG's understanding of long-context knowledge which include global information and factual details.\n\nLongRAG consists of a hybrid retriever, an LLM-augmented information extractor, a CoT-guided filter, and an LLM-augmented generator. These are key components that enables the RAG system to mine global long-context information and effectively identify factual details.\n\nLongRAG outperforms long-context LLMs (up by 6.94%), advanced RAG (up by 6.16%), and Vanilla RAG (up by 17.25%).\n\nI often think that the RAG systems of today are just the beginning and there is still a lot more exploration and innovation that's on the horizon for RAG systems. This paper is another example of cleverly assembling a hybrid RAG system that incorporates specific existing components/approaches aimed at enhancing different parts and addressing their inefficiencies.",
  "reply_count": 11,
  "retweet_count": 84,
  "favorite_count": 460,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/Gaq5gowaAAI02bP.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1849494571946066295",
  "created_at": "2024-10-24T16:54:02.000Z",
  "#sort_index": "1849494571946066295",
  "view_count": 38411,
  "quote_count": 5,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1849494571946066295"
}