🐦 Twitter Post Details


@jerryjliu0

Fine-tuning an LLM directly on retrieval augmented input prompts is a powerful idea to improve RAG systems 🔥:

💡 Encourage LLM to better use relevant context
💡 If the retrieved context is bad, encourage LLM to ignore it and still synthesize a correct answer!

We were inspired by the recent RA-DIT paper (@VictoriaLinML et al.), which implemented this LLM fine-tuning strategy as part of their overall approach towards fine-tuning LLMs + RAG.

We did a read of the technique in the paper, and implemented a guide on how to do this in @llama_index! See left 🖼️ for diagram, right 🖼️ for results.

Guide: https://t.co/qJlHzgke73

Results 🧪: We see increases in correctness/semantic similarity with the “ground-truth” responses.

Note ⚠️: we didn’t implement the retrieval fine-tuning technique in RA-DIT since we don’t have access to LLM log-probs.
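The dataset-construction idea the post describes can be sketched in plain Python: build fine-tuning examples whose prompts prepend retrieved context to the question, and deliberately swap in an irrelevant "distractor" chunk for some fraction of examples so the model learns to ignore bad context and still emit the ground-truth answer. This is a minimal illustration of the idea, not the llama_index guide's actual code; `retrieve`, `distractors`, and `bad_context_rate` are assumed names for this sketch.

```python
import random

def build_examples(qa_pairs, retrieve, distractors, bad_context_rate=0.3, seed=0):
    """Build RA-DIT-style fine-tuning examples (sketch).

    qa_pairs: list of (question, ground_truth_answer) tuples.
    retrieve: callable(question) -> relevant context string.
    distractors: pool of irrelevant context chunks.
    bad_context_rate: fraction of examples given a distractor instead of
        the retrieved context, forcing the model to learn to ignore it.
    """
    rng = random.Random(seed)
    examples = []
    for question, answer in qa_pairs:
        if distractors and rng.random() < bad_context_rate:
            # Deliberately bad context; the target answer stays correct.
            context = rng.choice(distractors)
        else:
            context = retrieve(question)
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        examples.append({"prompt": prompt, "completion": " " + answer})
    return examples
```

A trainer would then fine-tune on these prompt/completion pairs directly; the key design choice is that the completion is always the ground-truth answer, whether or not the prompt's context was relevant.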

🔧 Raw API Response

{
  "user": {
    "created_at": "2011-09-07T22:54:31.000Z",
    "default_profile_image": false,
    "description": "co-founder/CEO @llama_index\n\nEx-ML @robusthq,  AI research @Uber_ATG, ML Eng @Quora, @princeton",
    "fast_followers_count": 0,
    "favourites_count": 3927,
    "followers_count": 23788,
    "friends_count": 1156,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 610,
    "location": "",
    "media_count": 592,
    "name": "Jerry Liu",
    "normal_followers_count": 23788,
    "possibly_sensitive": false,
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1283610285031460864/1Q4zYhtb_normal.jpg",
    "screen_name": "jerryjliu0",
    "statuses_count": 2708,
    "translator_type": "none",
    "url": "https://t.co/S7FkTSefQ0",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "369777416"
  },
  "id": "1709646787076935818",
  "conversation_id": "1709646787076935818",
  "full_text": "Fine-tuning an LLM directly on retrieval augmented input prompts is a powerful idea to improve RAG systems 🔥:\n\n💡 Encourage LLM to better use relevant context\n💡 If the retrieved context is bad, encourage LLM to ignore it and still synthesize a correct answer!\n\nWe were inspired by the recent RA-DIT paper (@VictoriaLinML et al.), which implemented this LLM fine-tuning strategy as part of their overall approach towards fine-tuning LLMs + RAG.\n\nWe did a read of the technique in the paper, and implemented a guide on how to do this in @llama_index! See left 🖼️ for diagram, right 🖼️ for results.\n\nGuide: https://t.co/qJlHzgke73\n\nResults 🧪: We see increases in correctness/semantic similarity with the “ground-truth” responses.\n\nNote ⚠️: we didn’t implement the retrieval fine-tuning technique in RA-DIT since we don’t have access to LLM log-probs.",
  "reply_count": 6,
  "retweet_count": 44,
  "favorite_count": 308,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F7nifZ8aIAAIkax.png",
      "type": "photo"
    },
    {
      "media_url": "https://pbs.twimg.com/media/F7nif1GbMAACuee.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/jerryjliu0/status/1709646787076935818",
  "created_at": "2023-10-04T19:08:52.000Z",
  "#sort_index": "1709646787076935818",
  "view_count": 53777,
  "quote_count": 2,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/jerryjliu0/status/1709646787076935818"
}