@jerryjliu0
Fine-tuning an LLM directly on retrieval-augmented input prompts is a powerful idea for improving RAG systems 🔥:
💡 Encourage the LLM to make better use of relevant context
💡 If the retrieved context is bad, encourage the LLM to ignore it and still synthesize a correct answer!

We were inspired by the recent RA-DIT paper (@VictoriaLinML et al.), which implemented this LLM fine-tuning strategy as part of its overall approach to fine-tuning LLMs + RAG. We read through the technique in the paper and wrote a guide on how to do this in @llama_index! See left 🖼️ for the diagram, right 🖼️ for the results.

Guide: https://t.co/qJlHzgke73

Results 🧪: We see increases in correctness/semantic similarity with the “ground-truth” responses.

Note ⚠️: we didn’t implement the retriever fine-tuning technique from RA-DIT, since we don’t have access to LLM log-probs.
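To make the idea concrete, here’s a minimal sketch (not the actual @llama_index guide code) of how you might build such a fine-tuning dataset: each prompt embeds retrieved context, and a fraction of examples deliberately swap in irrelevant context while keeping the ground-truth answer, so the model learns to ignore bad retrievals. The function names, prompt template, and `retrieve` callback are all assumptions for illustration.

```python
import random

def build_ra_finetune_examples(qa_pairs, retrieve, distractor_prob=0.3, seed=0):
    """Build (prompt, completion) fine-tuning examples on
    retrieval-augmented prompts.

    qa_pairs: list of (question, ground_truth_answer) tuples
    retrieve: callback returning context text for a question (hypothetical)
    distractor_prob: fraction of examples that get *irrelevant* context,
        teaching the model to fall back on its own knowledge
    """
    rng = random.Random(seed)
    examples = []
    for i, (question, answer) in enumerate(qa_pairs):
        context = retrieve(question)  # normally: relevant chunks
        # Occasionally swap in context retrieved for a *different*
        # question to simulate a bad retrieval.
        if len(qa_pairs) > 1 and rng.random() < distractor_prob:
            j = rng.choice([k for k in range(len(qa_pairs)) if k != i])
            context = retrieve(qa_pairs[j][0])
        prompt = (
            "Context information is below.\n"
            f"{context}\n"
            "Given the context (which may be irrelevant), answer the query.\n"
            f"Query: {question}\nAnswer: "
        )
        # Target is always the ground-truth answer, even under bad context.
        examples.append({"prompt": prompt, "completion": answer})
    return examples
```

The resulting list can be serialized to JSONL and passed to any standard instruction fine-tuning pipeline.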