🐦 Twitter Post Details


@omarsar0

Long-Context LLMs Meet RAG

For many long-context LLMs, the quality of outputs declines as the number of retrieved passages increases. It seems that the performance loss is due to retrieved hard negatives.

They propose two ways to improve long-context LLM-based RAG: 1) retrieval reordering, and 2) RAG-specific tuning with intermediate reasoning to help with relevance identification.

"Our proposed approaches show significant accuracy and robustness improvements on long-context RAG performance."

I've recently seen a few research papers trying to make long-context LLM-based RAG work. It looks like the order of passages and reasoning for retrieving relevant information produce the biggest performance gains.
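The retrieval-reordering idea mentioned above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes passages arrive sorted by retriever score (best first) and exploits the "lost in the middle" tendency of long-context LLMs by placing the strongest passages at the ends of the prompt, pushing likely hard negatives toward the middle. The function name and signature are illustrative.

```python
def reorder_passages(passages_by_score):
    """Reorder retrieved passages so the highest-scored ones sit at
    the beginning and end of the context, where long-context LLMs
    attend most, leaving weaker (potentially hard-negative) passages
    in the middle.

    `passages_by_score` must be sorted best-first.
    """
    front, back = [], []
    # Alternate passages between the front and the back of the prompt,
    # starting with the best one at the very front.
    for i, passage in enumerate(passages_by_score):
        (front if i % 2 == 0 else back).append(passage)
    # Reverse the back half so the second-best passage ends up last.
    return front + back[::-1]


# Example: passages ranked 1 (best) through 5 (worst).
print(reorder_passages([1, 2, 3, 4, 5]))  # → [1, 3, 5, 4, 2]
```

With five ranked passages, the two strongest (1 and 2) land at the edges of the prompt and the weakest (5) lands in the middle, which is the placement the tweet credits with the biggest gains.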

📊 Media Metadata

{
  "media": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/GZomC1eWwAANJai.png",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "nlp": {
    "sentiment": "positive",
    "processed_at": "2025-08-06T12:56:36.911281"
  },
  "original_structure": "had_media_only"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27933,
    "followers_count": 216709,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3688,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216709,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1844828836619334066",
  "conversation_id": "1844828836619334066",
  "full_text": "Long-Context LLMs Meet RAG\n\nFor many long-context LLMs, the quality of outputs declines as the number of passages increases.\n\nIt seems that the performance loss is due to retrieved hard negatives. \n\nThey propose two ways to improve long-context LLM-based RAG: 1) retrieval reordering and RAG-specific tuning with intermediate reasoning to help with relevance identification.\n\n\"Our proposed approaches show significant accuracy and robustness improvements on long-context RAG performance.\"\n\nI've recently seen a few research papers trying to make long-context LLM-based RAG work. It looks like order of passages and reasoning for retrieving relevant information produce the biggest performance gains.",
  "reply_count": 5,
  "retweet_count": 100,
  "favorite_count": 507,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GZomC1eWwAANJai.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1844828836619334066",
  "created_at": "2024-10-11T19:54:04.000Z",
  "#sort_index": "1844828836619334066",
  "view_count": 72228,
  "quote_count": 4,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1844828836619334066"
}