@jerryjliu0
Here are seven different ways to query knowledge graphs with LLMs 👇:

1️⃣ Keyword-based entity retrieval: extract keywords to look up relevant KG entities, pull in linked text chunks. Optionally explore relationships to pull in more context.
2️⃣ Vector-based entity retrieval: look up KG entities with vector similarity, pull in linked text chunks. Optionally explore relationships.
3️⃣ Hybrid entity retrieval: combination of (1) and (2), with deduplication.
4️⃣ Raw vector-index retrieval: drop the relationships and represent entities/documents in a flat vector store.
5️⃣ Combined raw vector + KG retrieval: look up text chunks via vector similarity and via entity retrieval (any one of 1, 2, 3), then combine/dedup the results.
6️⃣ Text-to-Cypher: use the LLM to generate Cypher queries; works against any knowledge graph.
7️⃣ Graph RAG (proposed by @wey_gu): similar to 1, 2, 3 but over any KG - there don't have to be associated text chunks!

Sample results are given below (left 🖼️). Full pros/cons of each approach are attached in the (right) diagram 🖼️

This is one of the most comprehensive treatments of RAG + KGs in @llama_index that I've seen - @wenqi_glantz has done it again! And it covers a cool use case: constructing a knowledge graph over the Philadelphia Phillies for baseball-related queries. Of course, a lot of this builds on core foundational work by @wey_gu.

Article: https://t.co/vRTMDQhDM3
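For readers who want to try these setups, here is a rough sketch of approaches 1-4, 6, and 7 in code. It assumes the legacy (pre-v0.10) llama_index API with NebulaGraph as the graph store; the `./data` directory, the `phillies` space name, and the exact argument names are placeholders/assumptions and may differ from the article or from newer releases.

```python
# Sketch only: assumes legacy llama_index (pre-v0.10) imports and a running
# NebulaGraph instance with NEBULA_USER / NEBULA_PASSWORD / NEBULA_ADDRESS set.
from llama_index import (
    KnowledgeGraphIndex,
    VectorStoreIndex,
    SimpleDirectoryReader,
    StorageContext,
)
from llama_index.graph_stores import NebulaGraphStore
from llama_index.query_engine import KnowledgeGraphQueryEngine, RetrieverQueryEngine
from llama_index.retrievers import KnowledgeGraphRAGRetriever

documents = SimpleDirectoryReader("./data").load_data()

# Build a KG index backed by a graph store (space/edge/tag names are placeholders).
graph_store = NebulaGraphStore(
    space_name="phillies",
    edge_types=["relationship"],
    rel_prop_names=["relationship"],
    tags=["entity"],
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)
kg_index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=10,
    include_embeddings=True,  # needed for the embedding/hybrid retriever modes
)

# 1) Keyword-based entity retrieval over the KG, pulling in linked text chunks.
kw_engine = kg_index.as_query_engine(retriever_mode="keyword", include_text=True)

# 2) Vector-based entity retrieval.
emb_engine = kg_index.as_query_engine(retriever_mode="embedding", include_text=True)

# 3) Hybrid: keyword + embedding entity retrieval with de-duplication.
hybrid_engine = kg_index.as_query_engine(retriever_mode="hybrid", include_text=True)

# 4) Raw vector-index retrieval: no relationships, just a flat vector store.
vector_engine = VectorStoreIndex.from_documents(documents).as_query_engine()

# 5) would combine (4) with any of (1)-(3) via a custom retriever that merges
#    and de-duplicates the retrieved nodes; omitted here for brevity.

# 6) Text-to-Cypher (nGQL for NebulaGraph): the LLM writes the graph query.
cypher_engine = KnowledgeGraphQueryEngine(storage_context=storage_context)

# 7) Graph RAG: entity-based subgraph retrieval over any KG, no text chunks required.
graph_rag_retriever = KnowledgeGraphRAGRetriever(storage_context=storage_context)
graph_rag_engine = RetrieverQueryEngine.from_args(graph_rag_retriever)

response = hybrid_engine.query("Tell me about the Phillies' 2023 season.")
print(response)
```

Each engine exposes the same `.query()` interface, so swapping between the seven setups is mostly a matter of which retriever/query engine you construct.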