@llama_index
We’re publishing 2 full-length tutorial videos showing you how to implement agentic RAG techniques: adding LLM layers that reason over inputs and post-process the outputs.

Auto-retrieval: use LLMs to reason over vector DBs as tools and infer metadata filters. YT: https://t.co/Iit3OiJFhe

Corrective RAG: use LLMs to reason over the output of retrieval and decide whether to fall back to web search: https://t.co/Jd6TLuEShS

Stack:
- Use LlamaCloud as the core knowledge management layer for indexing/retrieval. Set up a pipeline in minutes.
- Use @llama_index workflows to define event-driven orchestration.

Sign up for LlamaCloud, we’re letting more people off the waitlist: https://t.co/yQGTiRSNvj

Come talk to us if you’re an enterprise: https://t.co/ek65coieav
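The auto-retrieval idea above can be sketched in a few lines: before querying the vector store, an LLM maps the natural-language query to structured metadata filters, and only matching documents are ranked. This is a minimal, dependency-free sketch; `fake_llm_infer_filters`, the metadata field names, and the keyword-overlap scoring are illustrative stand-ins, not the LlamaCloud or llama_index API.

```python
# Sketch of auto-retrieval: an LLM infers structured metadata filters
# from a natural-language query before retrieval over a vector store.
# fake_llm_infer_filters stands in for a real LLM call; field names
# ("year", "doc_type") are illustrative assumptions.

def fake_llm_infer_filters(query: str) -> dict:
    """Stand-in for an LLM that maps a query to metadata filters."""
    filters = {}
    if "2023" in query:
        filters["year"] = 2023
    if "blog" in query.lower():
        filters["doc_type"] = "blog"
    return filters

def auto_retrieve(query: str, documents: list[dict], top_k: int = 2) -> list[dict]:
    """Filter candidates by inferred metadata, then rank by naive keyword overlap."""
    filters = fake_llm_infer_filters(query)
    candidates = [
        d for d in documents
        if all(d.get("metadata", {}).get(k) == v for k, v in filters.items())
    ]
    def score(doc: dict) -> int:
        words = set(query.lower().split())
        return len(words & set(doc["text"].lower().split()))
    return sorted(candidates, key=score, reverse=True)[:top_k]
```

In a real pipeline the inferred filters would be passed to the vector store's metadata-filtered query, and a proper embedding similarity would replace the keyword score.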
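Corrective RAG, as described above, adds an LLM grading step after retrieval: each retrieved chunk is judged for relevance, and if nothing passes, the system falls back to web search. A minimal sketch of that control flow, assuming stand-in functions: `grade_chunk` (here simple keyword overlap in place of an LLM grader), `web_search`, and the 0.5 threshold are all hypothetical choices, not the tutorial's actual implementation.

```python
# Sketch of corrective RAG: grade retrieved chunks, and fall back to
# web search when no chunk is relevant enough. grade_chunk and
# web_search are stand-ins for an LLM grader and a search tool.

def grade_chunk(query: str, chunk: str) -> float:
    """Stand-in grader: fraction of query words present in the chunk, in [0, 1]."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def web_search(query: str) -> list[str]:
    """Stand-in for a web search tool."""
    return [f"web result for: {query}"]

def corrective_rag(query: str, retrieved: list[str], threshold: float = 0.5) -> list[str]:
    """Keep chunks the grader deems relevant; if none pass, search the web instead."""
    relevant = [c for c in retrieved if grade_chunk(query, c) >= threshold]
    return relevant if relevant else web_search(query)
```

In the tutorial's setting, each step (retrieve, grade, search) would map naturally onto an event-driven llama_index workflow step rather than plain function calls.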