@jerryjliu0
Dropping the first few videos in my knowledge assistant video series 👇

Step 1: Define an agentic workflow on top of your standard RAG endpoints that uses LLMs to reason both before and after the retrieval layer. This lets you build more sophisticated research assistants that can answer more complex questions than a standard QA chatbot.

Intro video: https://t.co/FGe2tvGf05

Auto-retrieval: use LLMs to reason over vector DBs as tools and infer metadata filters. YT: https://t.co/yFoz7qZ7Vf

Corrective RAG: use LLMs to reason over the output of retrieval and decide whether to fall back to web search: https://t.co/lBc2CgqLww

These use LlamaCloud, which you can sign up for here: https://t.co/XYZmx5TFz8

If you don't have access to LlamaCloud yet, don't fret! You can always use our standard VectorStoreIndex abstraction for now.
YouTube: From RAG to Knowledge Assistants
This video discusses the potential of LLMs to answer complex questions using diverse data sources.
• LLMs can solve complex tasks across multiple data sources.
• The transition from RAG to Knowledge Assistants is significant.
YouTube: Advanced RAG: Auto-Retrieval (with LlamaCloud)
This video demonstrates how to build an auto-retrieval pipeline with LlamaCloud over a research-document corpus.
• Building an auto-retrieval pipeline
• Utilizing LlamaCloud retrievers
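The core idea of auto-retrieval, stripped of any vendor API, is: an LLM reads the user's query and emits structured metadata filters before the vector search runs. Here is a minimal self-contained sketch of that flow; `infer_filters` is a hypothetical rule-based stand-in for the LLM call (a real pipeline would prompt the model to return JSON filters), and the in-memory `CORPUS` stands in for a vector DB collection:

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    metadata: dict = field(default_factory=dict)

# Tiny in-memory corpus standing in for a vector DB collection.
CORPUS = [
    Doc("Llama 2 scaling results", {"year": 2023, "topic": "llm"}),
    Doc("RAG survey", {"year": 2024, "topic": "rag"}),
    Doc("Attention is all you need", {"year": 2017, "topic": "transformers"}),
]

def infer_filters(query: str) -> dict:
    """Stand-in for the LLM call that reads the query and emits
    structured metadata filters. Here: a crude heuristic that
    treats any 4-digit token as a year filter."""
    filters = {}
    for token in query.split():
        if token.isdigit() and len(token) == 4:
            filters["year"] = int(token)
    return filters

def auto_retrieve(query: str) -> list[Doc]:
    # Reason over the query first, THEN filter the store.
    filters = infer_filters(query)
    return [
        d for d in CORPUS
        if all(d.metadata.get(k) == v for k, v in filters.items())
    ]

hits = auto_retrieve("papers about retrieval from 2024")
print([d.text for d in hits])  # → ['RAG survey']
```

The point is the ordering: the model narrows the search space with filters it inferred from the query, rather than relying on embedding similarity alone.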
YouTube: Advanced RAG: Corrective RAG (with LlamaCloud)
This video demonstrates how to use LlamaCloud and Tavily AI to build a Corrective RAG workflow.
• Introduction to the Corrective RAG workflow
• Utilizing LlamaCloud for AI applications
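Corrective RAG reasons after retrieval: an LLM grades each retrieved chunk for relevance, and if nothing passes, the workflow falls back to a web-search tool (Tavily in the video). The sketch below mocks both sides so it runs standalone; `grade_relevance` uses keyword overlap as a hypothetical stand-in for the LLM grader, and `web_search` is a placeholder for the real search tool:

```python
def grade_relevance(query: str, chunk: str) -> bool:
    """Stand-in for the LLM grader that judges whether a retrieved
    chunk actually answers the query. Here: crude keyword overlap
    instead of a model call."""
    q_terms = set(query.lower().split())
    return len(q_terms & set(chunk.lower().split())) >= 2

def web_search(query: str) -> list[str]:
    """Placeholder for a web-search tool such as Tavily."""
    return [f"web result for: {query}"]

def corrective_rag(query: str, retrieved: list[str]) -> list[str]:
    # Keep only the chunks the grader deems relevant.
    relevant = [c for c in retrieved if grade_relevance(query, c)]
    # If retrieval came back empty-handed, correct with web search.
    return relevant if relevant else web_search(query)

docs = ["llamacloud indexes research documents", "unrelated text"]
print(corrective_rag("how does llamacloud indexes work", docs))
```

The design choice is the branch itself: a bad retrieval is detected and corrected instead of being stuffed into the prompt, which is what separates this from a standard QA chain.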