@llama_index
A key concept we’ve been playing around with is “chunk dreaming” (s/o @tomchapin) 💭 Given a text chunk, auto-extract metadata like the questions it can answer, plus summaries over adjacent nodes. Better context -> better-performing RAG. Brand-new guide 💫: https://t.co/tMrp4T9Teg https://t.co/me5XVTUk8G
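The idea can be sketched in a few lines: for each chunk, ask an LLM for the questions the chunk answers and for a summary over a window of neighboring chunks, then attach both as metadata before indexing. This is a minimal illustrative sketch, not LlamaIndex's actual extractor API; the `llm` callable here is a hypothetical stand-in (stubbed so the snippet runs without an API key).

```python
from typing import Callable


def dream_chunk(chunks: list[str], i: int, llm: Callable[[str], str]) -> dict:
    """Build 'dreamed' metadata for chunks[i], using neighbors for context."""
    # Window over the previous, current, and next chunk (clamped at edges).
    window = " ".join(chunks[max(0, i - 1): i + 2])
    return {
        "questions_answered": llm(
            f"List questions this passage can answer:\n{chunks[i]}"
        ),
        "adjacent_summary": llm(f"Summarize:\n{window}"),
    }


# Stub LLM (echoes the first prompt line) so the sketch is self-contained;
# in practice you would route these prompts to a real model.
stub_llm = lambda prompt: prompt.splitlines()[0]

chunks = ["LLMs need context.", "Metadata improves retrieval.", "RAG wins."]
metadata = [dream_chunk(chunks, i, stub_llm) for i in range(len(chunks))]
```

Each chunk then carries retrieval-friendly metadata alongside its raw text, which is what gives the retriever the richer context the post describes.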