🐦 Twitter Post Details


@jerryjliu0

Building good RAG systems is hard, but building LLM-powered QA systems that can scale to large #’s of docs and question types is even harder 📑

We’re excited to introduce multi-document agents (V0) - a step beyond “naive” top-k RAG. Using multi-document agents allows our system to answer a broad set of questions, some of which aren’t really possible with “basic” RAG:

✅ fact-based QA over a single doc
✅ Summarization over a single doc
✅ fact-based comparisons over multiple docs
✅ Holistic comparisons across multiple docs

Our agent architecture allows answering these types of questions while scaling to a large # of docs:

📄🤖: Per doc, set up a document agent that can do joint QA / summarization
📚🤖: Set up a multi-document agent over these sub-agents/docs.
🛠️🔎: Instead of retrieving all tools/docs at query time, retrieve the top-k tools and selectively pick the docs/tools to query.

This is v0; there’s way more to be done/improved. Next steps: parallel query planning (instead of relying only on CoT), adding in structured data, reducing latency, and more.

Full guide: https://t.co/745IjKThaG
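The architecture in the tweet can be sketched in plain Python with stubbed model calls. This is only an illustration of the pattern, not the LlamaIndex implementation the author links to: every name here (`DocAgent`, `TopAgent`, `retrieve_tools`, the keyword-overlap scoring) is a hypothetical stand-in, and real tool retrieval would use embedding similarity rather than word overlap.

```python
from dataclasses import dataclass


@dataclass
class DocAgent:
    """Per-document agent exposing joint QA and summarization tools."""
    name: str
    text: str

    def qa(self, question: str) -> str:
        # Stand-in for retrieval + an LLM call over this one document.
        return f"[{self.name}] answer to {question!r}"

    def summarize(self) -> str:
        # Stand-in for an LLM summarization call over the full document.
        return f"[{self.name}] summary: {self.text[:40]}..."


@dataclass
class TopAgent:
    """Multi-document agent over the per-doc sub-agents.

    Instead of handing every doc/tool to the model at query time, it
    retrieves the top-k candidate tools and queries only those.
    """
    agents: list
    k: int = 2

    def retrieve_tools(self, query: str) -> list:
        # Cheap stand-in for tool retrieval: rank doc agents by keyword
        # overlap with the query (a real system would use embeddings).
        words = set(query.lower().split())
        ranked = sorted(
            self.agents,
            key=lambda a: -len(words & set(a.text.lower().split())),
        )
        return ranked[: self.k]

    def query(self, question: str) -> list:
        # Selectively query only the retrieved doc agents, never all docs.
        return [agent.qa(question) for agent in self.retrieve_tools(question)]


# Toy corpus: three "documents" keyed by name.
docs = {
    "rag-paper": "retrieval augmented generation for question answering",
    "agents-paper": "tool use and planning with llm agents",
    "latency-notes": "reducing latency in production llm systems",
}
top = TopAgent(agents=[DocAgent(n, t) for n, t in docs.items()], k=2)
answers = top.query("question answering with retrieval")
print(answers)
```

Only the two most relevant document agents are consulted, which is what lets the pattern scale to a large number of docs: cost per query depends on `k`, not on corpus size.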

🔧 Raw API Response

{
  "user": {
    "created_at": "2011-09-07T22:54:31.000Z",
    "default_profile_image": false,
    "description": "co-founder/CEO @llama_index\n\nEx-ML @robusthq,  AI research @Uber_ATG, ML Eng @Quora, @princeton",
    "fast_followers_count": 0,
    "favourites_count": 3927,
    "followers_count": 23788,
    "friends_count": 1156,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 610,
    "location": "",
    "media_count": 592,
    "name": "Jerry Liu",
    "normal_followers_count": 23788,
    "possibly_sensitive": false,
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1283610285031460864/1Q4zYhtb_normal.jpg",
    "screen_name": "jerryjliu0",
    "statuses_count": 2708,
    "translator_type": "none",
    "url": "https://t.co/S7FkTSefQ0",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "369777416"
  },
  "id": "1708523212366393403",
  "conversation_id": "1708523212366393403",
  "full_text": "Building good RAG systems is hard, but building LLM-powered QA systems that can scale to large #’s of docs and question types is even harder 📑\n\nWe’re excited to introduce multi-document agents (V0) - a step beyond “naive” top-k RAG. Using multi-document agents allows our system to answer a broad set of questions, some of which aren’t really possible with “basic” RAG:\n\n✅ fact-based QA over single doc\n✅ Summarization over single doc\n✅ fact-based comparisons over multiple docs\n✅ Holistic comparisons across multiple docs\n\nOur agent architecture allows answering these types of questions while scaling to large # docs:\n\n📄🤖: Per doc, setup a document agent that can do joint QA / summarization\n📚🤖: Setup a multi-document agent over these sub-agents/docs.\n🛠️🔎: Instead of retrieving all tools/docs at query-time, retrieve top-k tools, and selectively pick the docs/tools to query.\n\nThis is v0, there’s way more to be done/improve. Next steps: parallel query planning (instead of relying only on CoT), adding in structured data, reducing latency, and more.\n\nFull guide: https://t.co/745IjKThaG",
  "reply_count": 17,
  "retweet_count": 112,
  "favorite_count": 714,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F7XkuoUbYAAd8Cv.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/jerryjliu0/status/1708523212366393403",
  "created_at": "2023-10-01T16:44:11.000Z",
  "#sort_index": "1708523212366393403",
  "view_count": 141284,
  "quote_count": 6,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/jerryjliu0/status/1708523212366393403"
}