🐦 Twitter Post Details

Viewing enriched Twitter post

@dair_ai

Adaptive retrieval is the way to go!

And this RouteRAG paper shows why.

Let's talk about it:

RAG systems have a retrieval problem. The default approach to multi-hop reasoning today relies on fixed retrieval pipelines. It typically involves fetching text + maybe graph data, and hoping everything is retrieved in one shot.

But the reality is that complex questions and real-world tasks require adaptive retrieval. Sometimes you need text. Sometimes you need relational structure from a graph. Sometimes both help. And it's no secret that graph retrieval is expensive, so retrieving it unnecessarily wastes compute.

This new research introduces RouteRAG, an RL-based framework that teaches LLMs to make adaptive retrieval decisions during reasoning: when to retrieve, what source to retrieve from, and when to stop.

The model learns a unified generation policy through two-stage training.

> Stage 1 optimizes for answer correctness, establishing reasoning capability.

> Stage 2 adds an efficiency reward that discourages unnecessary retrieval, teaching the model to balance accuracy against computational cost.

The action space includes three retrieval modes: passage-only, graph-only, or hybrid. The model dynamically selects one based on evolving query needs. Text retrieval works well for simple questions. Graph retrieval shines for multi-hop reasoning. The policy learns when each is appropriate.

Results across five QA benchmarks: RouteRAG-7B achieves 60.6 average F1, outperforming Search-R1 (56.8 F1) despite being trained on only 10k examples versus 170k. On multi-hop datasets like 2Wiki, it reaches 64.6 F1 compared to 58.9 for Search-R1.

The efficiency gains are also substantial. RouteRAG-7B reduces average retrieval turns by 20% compared to training without the efficiency reward, while actually improving accuracy by 1.1 F1 points. So we get the best of both worlds: fewer retrieval calls and better answers.
And here is something exciting:

Small models approach large-model performance. RouteRAG with Qwen2.5-3B surpasses several graph-based RAG systems built on GPT-4o-mini, suggesting that improving the retrieval policy can be as impactful as scaling the backbone.

Teaching models when and what to retrieve through RL yields more efficient and accurate multi-hop reasoning than scaling training data or model size alone.

Paper: https://t.co/a4J6oAX0GC

Learn to build RAG and effective AI Agents in our academy: https://t.co/zQXQt0PMbG
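The two-stage reward above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the function names and the per-turn cost constant are assumptions, and stage 1's correctness signal is approximated here with token-level F1.

```python
# Toy sketch of a two-stage reward for adaptive retrieval (illustrative only).

def stage1_reward(predicted: str, gold: str) -> float:
    """Stage 1: optimize answer correctness only (token-level F1 here)."""
    pred_tokens = predicted.lower().split()
    gold_tokens = gold.lower().split()
    common = set(pred_tokens) & set(gold_tokens)
    if not common:
        return 0.0
    precision = len(common) / len(pred_tokens)
    recall = len(common) / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def stage2_reward(predicted: str, gold: str, num_retrievals: int,
                  cost_per_turn: float = 0.05) -> float:
    """Stage 2: subtract an efficiency penalty per retrieval turn, so the
    policy only retrieves when it actually improves the answer."""
    return stage1_reward(predicted, gold) - cost_per_turn * num_retrievals
```

With this shaping, an extra retrieval turn is only worth taking if it raises expected answer F1 by more than the per-turn cost, which is the accuracy/compute trade-off the post describes.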
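The action space is also easy to picture: at each reasoning step the policy picks one of the three retrieval modes or decides to stop and answer. A minimal sketch, with enum names and the rollout loop invented for illustration (the paper's actual interface will differ):

```python
# Illustrative action space for an adaptive-retrieval policy.
from enum import Enum

class Action(Enum):
    RETRIEVE_PASSAGE = "passage"  # text retrieval: cheap, good for simple lookups
    RETRIEVE_GRAPH = "graph"      # graph retrieval: costly, good for multi-hop
    RETRIEVE_HYBRID = "hybrid"    # query both sources at once
    ANSWER = "answer"             # stop retrieving and emit the final answer

def rollout(policy, question: str, max_turns: int = 4):
    """Interleave reasoning and retrieval until the policy chooses to answer."""
    context, turns = [], 0
    while turns < max_turns:
        action = policy(question, context)
        if action is Action.ANSWER:
            break
        # Stub retrieval call: a real system would query a passage index,
        # a knowledge graph, or both, depending on the chosen mode.
        context.append(f"<{action.value} results for: {question}>")
        turns += 1
    return context, turns
```

RL over this loop is what lets the model learn that passage retrieval suffices for simple questions while graph or hybrid retrieval pays off on multi-hop ones.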

Media 1
Media 2

📊 Media Metadata

{
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000400449355325806/media_0.jpg?",
      "filename": "media_0.jpg"
    },
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000400449355325806/media_1.png?",
      "filename": "media_1.png"
    }
  ],
  "processed_at": "2025-12-15T03:42:12.535783",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2000400449355325806",
  "url": "https://x.com/dair_ai/status/2000400449355325806",
  "twitterUrl": "https://twitter.com/dair_ai/status/2000400449355325806",
  "text": "Adaptive retrieval is the way to go!\n\nAnd this RouteRAG paper shows why.\n\nLet's talk about it:\n\nRAG systems have a retrieval problem. The default approach to multi-hop reasoning today relies on fixed retrieval pipelines. It typically involves fetching text + maybe graph data, and hope everything is retrieved in one-shot.\n\nBut the reality is that complex questions and real-world tasks require adaptive retrieval. Sometimes you need text. Sometimes you need relational structure from a graph. Sometimes it will be great to use both. And it's not secrete that graph retrieval is expensive, so retrieving it unnecessarily wastes compute.\n\nThis new research introduces RouteRAG, an RL-based framework that teaches LLMs to make adaptive retrieval decisions during reasoning. When to retrieve, what source to retrieve from, and when to stop.\n\nThe model learns a unified generation policy through two-stage training.\n\n> Stage 1 optimizes for answer correctness, establishing reasoning capability.\n\n> Stage 2 adds an efficiency reward that discourages unnecessary retrieval, teaching the model to balance accuracy against computational cost.\n\nThe action space includes three retrieval modes: passage-only, graph-only, or hybrid. The model dynamically selects based on evolving query needs. Text retrieval works well for simple questions. Graph retrieval shines for multi-hop reasoning. The policy learns when each is appropriate.\n\nResults across five QA benchmarks: RouteRAG-7B achieves 60.6 average F1, outperforming Search-R1 (56.8 F1) despite being trained on only 10k examples versus 170k. On multi-hop datasets like 2Wiki, it reaches 64.6 F1 compared to 58.9 for Search-R1.\n\nThe efficiency gains are also substantial. RouteRAG-7B reduces average retrieval turns by 20% compared to training without the efficiency reward, while actually improving accuracy by 1.1 F1 points. So we get best of both worlds: fewer retrieval calls and better answers.\n\nAnd here is something exciting:\n\nSmall models also approach large model performance. RouteRAG with Qwen2.5-3B surpasses several graph-based RAG systems built on GPT-4o-mini, suggesting that improving the retrieval policy can be as impactful as scaling the backbone.\n\nTeaching models when and what to retrieve through RL yields more efficient and accurate multi-hop reasoning than scaling training data or model size alone.\n\nPaper: https://t.co/a4J6oAX0GC\n\nLearn to build RAG and effective AI Agents in our academy: https://t.co/zQXQt0PMbG",
  "source": "Twitter for iPhone",
  "retweetCount": 5,
  "replyCount": 0,
  "likeCount": 25,
  "quoteCount": 0,
  "viewCount": 1578,
  "createdAt": "Mon Dec 15 03:00:07 +0000 2025",
  "lang": "en",
  "bookmarkCount": 22,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2000400449355325806",
  "displayTextRange": [
    0,
    281
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "dair_ai",
    "url": "https://x.com/dair_ai",
    "twitterUrl": "https://twitter.com/dair_ai",
    "id": "889050642903293953",
    "name": "DAIR.AI",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1643277398522187778/31dedbLo_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/889050642903293953/1742055232",
    "description": "",
    "location": "",
    "followers": 83289,
    "following": 1,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Sun Jul 23 09:12:45 +0000 2017",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 3886,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 88,
    "statusesCount": 2660,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2000219246601920671"
    ],
    "profile_bio": {
      "description": "Democratizing AI research, education, and technologies.",
      "entities": {
        "description": {},
        "url": {
          "urls": [
            {
              "display_url": "dair.ai",
              "expanded_url": "https://www.dair.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/lkqPZtMmfU"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/X2VFTVhPtL",
        "expanded_url": "https://twitter.com/dair_ai/status/2000400449355325806/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "2000400445295181824",
        "indices": [
          282,
          305
        ],
        "media_key": "3_2000400445295181824",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvC2Zs+WkAACgACG8LZnDBbIW4AAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG8LZmz5aQAAKAAIbwtmcMFshbgAA",
            "media_key": "3_2000400445295181824"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G8LZmz5aQAAv8vE.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 808,
              "w": 1442,
              "x": 0,
              "y": 0
            },
            {
              "h": 1442,
              "w": 1442,
              "x": 0,
              "y": 0
            },
            {
              "h": 1644,
              "w": 1442,
              "x": 0,
              "y": 0
            },
            {
              "h": 1800,
              "w": 900,
              "x": 0,
              "y": 0
            },
            {
              "h": 1800,
              "w": 1442,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1800,
          "width": 1442
        },
        "sizes": {
          "large": {
            "h": 1800,
            "w": 1442
          }
        },
        "type": "photo",
        "url": "https://t.co/X2VFTVhPtL"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "arxiv.org/abs/2512.09487",
        "expanded_url": "https://arxiv.org/abs/2512.09487",
        "indices": [
          2377,
          2400
        ],
        "url": "https://t.co/a4J6oAX0GC"
      },
      {
        "display_url": "dair-ai.thinkific.com",
        "expanded_url": "https://dair-ai.thinkific.com/",
        "indices": [
          2461,
          2484
        ],
        "url": "https://t.co/zQXQt0PMbG"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}