🐦 Twitter Post Details

Viewing enriched Twitter post

@jerryjliu0

Dropping the first few videos on my knowledge assistant video series 👇

Step 1: Figure out how to define an agentic workflow on top of your standard RAG endpoints that can use LLMs to reason before your retrieval layer and afterwards.

This lets you build more sophisticated research assistants that let you answer more complex questions than your standard QA chatbot.

Intro video: https://t.co/FGe2tvGf05
Auto-retrieval: use LLMs to reason over vector dbs as tools and infer metadata filters. YT: https://t.co/yFoz7qZ7Vf
Corrective RAG: use LLMs to reason over the output of retrieval and determine whether you’d want to do web search: https://t.co/lBc2CgqLww

It uses LlamaCloud which you can signup for here: https://t.co/XYZmx5TFz8

If you don't have access to LlamaCloud yet, don't fret! You can always use our standard VectorStoreIndex abstraction for now.
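The "agentic workflow on top of RAG endpoints" idea can be sketched in plain Python: an LLM step runs before retrieval (rewriting the user question into a search query) and another runs after it (answering only from retrieved context). The `llm` and `rag_endpoint` functions below are illustrative stand-ins, not LlamaIndex or LlamaCloud APIs.

```python
def llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned reasoning for the demo."""
    if prompt.startswith("Rewrite"):
        return "knowledge assistant architecture overview"
    return "Answer grounded in: " + prompt.split("Context: ", 1)[1]

def rag_endpoint(query: str) -> list[str]:
    """Stand-in for a standard RAG retrieval endpoint."""
    corpus = {
        "knowledge assistant architecture overview": [
            "Knowledge assistants layer agentic reasoning over RAG."
        ]
    }
    return corpus.get(query, [])

def agentic_rag(user_question: str) -> str:
    # Step 1: reason *before* retrieval -- rewrite the raw question
    # into a retrieval-friendly query.
    query = llm(f"Rewrite as a search query: {user_question}")
    # Step 2: call the standard RAG retrieval layer.
    docs = rag_endpoint(query)
    # Step 3: reason *after* retrieval -- synthesize an answer only
    # from retrieved context, with an explicit fallback when empty.
    if not docs:
        return "No relevant context retrieved."
    return llm(f"Answer the question. Context: {' '.join(docs)}")

print(agentic_rag("How do knowledge assistants differ from plain RAG?"))
```

Swapping the stubs for a real LLM client and a real retriever keeps the same three-step shape; the point is that the reasoning steps wrap the retrieval layer rather than replace it.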

YouTube

From RAG to Knowledge Assistants

This article discusses the potential of LLMs to answer complex questions using diverse data sources.

• LLMs can solve complex tasks across multiple data sources.

• The transition from RAG to Knowledge Assistants is significant.

• Improved question-answering capabilities are highlighted.

YouTube

Advanced RAG: Auto-Retrieval (with LlamaCloud)

This guide demonstrates how to create an auto-retrieval pipeline using LlamaCloud on a research document corpus.

• Building an auto-retrieval pipeline

• Utilizing LlamaCloud retrievers

• Focus on research document corpus
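Auto-retrieval as described in the post means using an LLM to infer structured metadata filters from a natural-language query before the vector search runs. The sketch below is self-contained and rule-based: `infer_filters` stands in for the LLM call, the documents and metadata fields are invented for the demo, and nothing here is the LlamaCloud retriever API.

```python
import re

DOCS = [
    {"text": "Survey of retrieval-augmented generation", "year": 2023, "topic": "rag"},
    {"text": "Agentic workflows for research assistants", "year": 2024, "topic": "agents"},
    {"text": "Metadata filtering in vector databases", "year": 2024, "topic": "rag"},
]

def infer_filters(query: str) -> dict:
    """Stand-in for an LLM that maps a query to metadata filters."""
    filters = {}
    if (m := re.search(r"\b(20\d{2})\b", query)):
        filters["year"] = int(m.group(1))
    for topic in ("rag", "agents"):
        if topic in query.lower():
            filters["topic"] = topic
    return filters

def auto_retrieve(query: str) -> list[str]:
    # Infer filters first, so scoring only considers matching documents.
    filters = infer_filters(query)
    candidates = [d for d in DOCS
                  if all(d.get(k) == v for k, v in filters.items())]
    # Trivial keyword-overlap scoring in place of vector similarity.
    terms = set(query.lower().split())
    candidates.sort(key=lambda d: -len(terms & set(d["text"].lower().split())))
    return [d["text"] for d in candidates]

print(auto_retrieve("2024 papers about rag"))
```

The design point: filters narrow the candidate set deterministically, so the similarity search (here a keyword overlap, in practice a vector store) ranks only documents the query's metadata constraints allow.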

YouTube

Advanced RAG: Corrective RAG (with LlamaCloud)

This video guide demonstrates how to utilize LlamaCloud and Tavily AI to create a Corrective RAG workflow.

• Introduction to Corrective RAG workflow

• Utilizing LlamaCloud for AI applications

• Integration with Tavily AI platform
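The corrective-RAG decision the post describes, reasoning over retrieval output and deciding whether to fall back to web search, can be sketched as a grade-then-branch step. The grader and both search functions below are stand-ins (the video uses LlamaCloud retrieval and Tavily web search; neither API is reproduced here).

```python
def grade(query: str, chunk: str) -> float:
    """Stand-in relevance grader: word overlap instead of an LLM judgment."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / max(len(q), 1)

def retrieve(query: str) -> list[str]:
    """Stand-in for index retrieval (e.g. a LlamaCloud retriever)."""
    return ["Corrective RAG checks retrieval quality before answering."]

def web_search(query: str) -> list[str]:
    """Stand-in for a web-search fallback (e.g. Tavily)."""
    return [f"[web result for: {query}]"]

def corrective_rag(query: str, threshold: float = 0.3) -> tuple[str, list[str]]:
    chunks = retrieve(query)
    relevant = [c for c in chunks if grade(query, c) >= threshold]
    # Decision point: trust the index, or correct with a web search.
    if relevant:
        return ("index", relevant)
    return ("web", web_search(query))

print(corrective_rag("how does corrective rag check retrieval quality"))
```

A real implementation would replace `grade` with an LLM judging each chunk's relevance, but the control flow, retrieve, grade, branch, is the whole of the technique.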

📊 Media Metadata

{
  "media": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/Ga7ZRgzbIAArgor.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    },
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/Ga7ZR7ibYAAZMFy.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "nlp": {
    "processed_at": "2025-08-06T12:42:00.880373",
    "sentiment": "positive",
    "topics": [
      "LLMs",
      "RAG",
      "AI Applications",
      "Prompt Engineering"
    ],
    "ner": {
      "entities": [
        {
          "entity": "knowledge assistant video series",
          "type": "series"
        },
        {
          "entity": "agentic workflow",
          "type": "concept"
        },
        {
          "entity": "RAG endpoints",
          "type": "technology"
        },
        {
          "entity": "LLMs",
          "type": "technology"
        },
        {
          "entity": "QA chatbot",
          "type": "application"
        }
      ]
    }
  },
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.542069",
  "import_source": "network_archive_import",
  "score_components": {
    "author": 0.09,
    "engagement": 0.12196663469151314,
    "quality": 0.2,
    "source": 0.12,
    "nlp": 0.1,
    "recency": 0.020000000000000004
  },
  "source_tagged_at": "2025-08-09T13:42:52.249692",
  "enriched": true,
  "enriched_at": "2025-08-09T13:42:52.249694",
  "enriched_links": [
    {
      "url": "https://t.co/FGe2tvGf05",
      "title": "From RAG to Knowledge Assistants",
      "description": "This article discusses the potential of LLMs to answer complex questions using diverse data sources.",
      "content_type": "article",
      "author": null,
      "site_name": "YouTube",
      "image_url": null,
      "key_points": [
        "LLMs can solve complex tasks across multiple data sources.",
        "The transition from RAG to Knowledge Assistants is significant.",
        "Improved question-answering capabilities are highlighted."
      ],
      "enriched_at": "2025-08-10T10:18:48.871399"
    },
    {
      "url": "https://t.co/yFoz7qZ7Vf",
      "title": "Advanced RAG: Auto-Retrieval (with LlamaCloud)",
      "description": "This guide demonstrates how to create an auto-retrieval pipeline using LlamaCloud on a research document corpus.",
      "content_type": "article",
      "author": null,
      "site_name": "YouTube",
      "image_url": null,
      "key_points": [
        "Building an auto-retrieval pipeline",
        "Utilizing LlamaCloud retrievers",
        "Focus on research document corpus"
      ],
      "enriched_at": "2025-08-10T10:18:53.242406"
    },
    {
      "url": "https://t.co/lBc2CgqLww",
      "title": "Advanced RAG: Corrective RAG (with LlamaCloud)",
      "description": "This video guide demonstrates how to utilize LlamaCloud and Tavily AI to create a Corrective RAG workflow.",
      "content_type": "article",
      "author": null,
      "site_name": "YouTube",
      "image_url": null,
      "key_points": [
        "Introduction to Corrective RAG workflow",
        "Utilizing LlamaCloud for AI applications",
        "Integration with Tavily AI platform"
      ],
      "enriched_at": "2025-08-10T10:18:59.476666"
    }
  ],
  "llm_enriched": true,
  "llm_enriched_at": "2025-08-10T10:18:59.476738",
  "original_structure": "had_media_only"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2011-09-07T22:54:31.000Z",
    "default_profile_image": false,
    "description": "co-founder/CEO @llama_index\n\nCareers: https://t.co/EUnMNmbCtx\nEnterprise: https://t.co/Ht5jwxSrQB",
    "fast_followers_count": 0,
    "favourites_count": 7173,
    "followers_count": 54387,
    "friends_count": 1364,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 1136,
    "location": "",
    "media_count": 1063,
    "name": "Jerry Liu",
    "normal_followers_count": 54387,
    "possibly_sensitive": false,
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1283610285031460864/1Q4zYhtb_normal.jpg",
    "screen_name": "jerryjliu0",
    "statuses_count": 5321,
    "translator_type": "none",
    "url": "https://t.co/YiIfjVlzb6",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "369777416"
  },
  "id": "1850656330119573734",
  "conversation_id": "1850656330119573734",
  "full_text": "Dropping the first few videos on my knowledge assistant video series 👇\n\nStep 1: Figure out how to define an agentic workflow on top of your standard RAG endpoints that can use LLMs to reason before your retrieval layer and afterwards.\n\nThis lets you build more sophisticated research assistants that let you answer more complex questions than your standard QA chatbot.\n\nIntro video: https://t.co/FGe2tvGf05\nAuto-retrieval: use LLMs to reason over vector dbs as tools and infer metadata filters. YT: https://t.co/yFoz7qZ7Vf\nCorrective RAG: use LLMs to reason over the output of retrieval and determine whether you’d want to do web search: https://t.co/lBc2CgqLww\n\nIt uses LlamaCloud which you can signup for here: https://t.co/XYZmx5TFz8\n\nIf you don't have access to LlamaCloud yet, don't fret! You can always use our standard VectorStoreIndex abstraction for now.",
  "reply_count": 0,
  "retweet_count": 36,
  "favorite_count": 202,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/Ga7ZRgzbIAArgor.jpg",
      "type": "photo"
    },
    {
      "media_url": "https://pbs.twimg.com/media/Ga7ZR7ibYAAZMFy.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/jerryjliu0/status/1850656330119573734",
  "created_at": "2024-10-27T21:50:27.000Z",
  "#sort_index": "1850656330119573734",
  "view_count": 26387,
  "quote_count": 4,
  "is_quote_tweet": true,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "quoted_tweet": {
    "user": {
      "created_at": "2022-12-18T00:52:44.000Z",
      "default_profile_image": false,
      "description": "Build LLM agents over your data\n\nGithub: https://t.co/HC19j7vMwc\nDocs: https://t.co/QInqg2zksh\nDiscord: https://t.co/3ktq3zzYII",
      "fast_followers_count": 0,
      "favourites_count": 1261,
      "followers_count": 82611,
      "friends_count": 26,
      "has_custom_timelines": false,
      "is_translator": false,
      "listed_count": 1366,
      "location": "",
      "media_count": 1375,
      "name": "LlamaIndex 🦙",
      "normal_followers_count": 82611,
      "possibly_sensitive": false,
      "profile_banner_url": "https://pbs.twimg.com/profile_banners/1604278358296055808/1696908553",
      "profile_image_url_https": "https://pbs.twimg.com/profile_images/1623505166996742144/n-PNQGgd_normal.jpg",
      "screen_name": "llama_index",
      "statuses_count": 2997,
      "translator_type": "none",
      "url": "https://t.co/epzefqQqZx",
      "verified": true,
      "withheld_in_countries": [],
      "id_str": "1604278358296055808"
    },
    "id": "1850572786521256392",
    "conversation_id": "1850572786521256392",
    "full_text": "We’re publishing 2 full-length tutorial videos showing you how to implement various agentic RAG techniques - adding LLM layers to reason over inputs and post process the outputs.\n\nAuto-retrieval: use LLMs to reason over vector dbs as tools and infer metadata filters. YT: https://t.co/Iit3OiJFhe\nCorrective RAG: use LLMs to reason over the output of retrieval and determine whether you’d want to do web search: https://t.co/Jd6TLuEShS\n\nStack:\n- Use LlamaCloud as the core knowledge management layer for indexing/retrieval. Setup a pipeline in minutes\n- Use @llama_index workflows to define event-driven orchestration\n\nSignup to LlamaCloud, we’re letting more people off the waitlist: https://t.co/yQGTiRSNvj\nCome talk to us if you’re an enterprise: https://t.co/ek65coieav",
    "reply_count": 1,
    "retweet_count": 53,
    "favorite_count": 264,
    "hashtags": [],
    "symbols": [],
    "user_mentions": [],
    "urls": [],
    "media": [
      {
        "media_url": "https://pbs.twimg.com/media/Ga6OGQpbAAA8nOb.jpg",
        "type": "photo"
      },
      {
        "media_url": "https://pbs.twimg.com/media/Ga6OGq3bUAAtsoj.jpg",
        "type": "photo"
      }
    ],
    "url": "https://twitter.com/llama_index/status/1850572786521256392",
    "created_at": "2024-10-27T16:18:28.000Z",
    "#sort_index": "1850656330119573800",
    "view_count": 42733,
    "quote_count": 2,
    "is_quote_tweet": false,
    "is_retweet": false,
    "is_pinned": false,
    "is_truncated": true
  },
  "startUrl": "https://x.com/jerryjliu0/status/1850656330119573734"
}