🐦 Twitter Post Details


@omarsar0

Multi-expert Prompting with LLMs

Multi-expert Prompting improves LLM responses by simulating multiple experts and aggregating their responses.

Multi-expert Prompting guides an LLM to fulfill input instructions by simulating multiple experts and selecting the best response among individual and aggregated views.

It achieves a new state-of-the-art on TruthfulQA-Generation with ChatGPT, surpassing the current SOTA of 87.97%.

It also improves performance across factuality and usefulness while reducing toxicity and hurtfulness.

This is a very nice prompting approach that has huge potential when building agentic workflows. Prompt examples are shared in the paper.
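The workflow described in the post (simulate several experts, aggregate their answers, then pick the best of the individual and aggregated views) can be sketched as follows. This is a minimal illustration, not the paper's exact prompts: the `llm()` function is a placeholder to be swapped for a real LLM client, and the persona/aggregation/selection prompt wordings are assumptions.

```python
def llm(prompt: str) -> str:
    """Placeholder LLM call; replace with a real API client (assumption)."""
    return f"[response to: {prompt[:40]}...]"

def multi_expert_prompt(question: str, expert_roles: list[str]) -> str:
    # 1. Simulate each expert: ask the LLM to answer in that persona.
    expert_answers = [
        llm(f"You are a {role}. Answer the question: {question}")
        for role in expert_roles
    ]
    # 2. Aggregate the individual expert views into one combined answer.
    combined = llm(
        "Merge these expert answers into a single consistent answer:\n"
        + "\n".join(expert_answers)
    )
    # 3. Select the best response among individual and aggregated candidates.
    candidates = expert_answers + [combined]
    numbered = "\n".join(f"{i}: {a}" for i, a in enumerate(candidates))
    choice = llm(
        f"Question: {question}\nCandidates:\n{numbered}\n"
        "Reply with only the number of the best answer."
    )
    try:
        return candidates[int(choice.strip())]
    except (ValueError, IndexError):
        return combined  # fall back to the aggregated answer
```

With a real model behind `llm()`, each call in this loop is an ordinary completion request, which is what makes the approach easy to drop into agentic workflows.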

Media 1

📊 Media Metadata

{
  "data": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/GbgyDxGasAA9pQH.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.551493",
  "import_source": "network_archive_import",
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1853286452227899851/media_0.jpg?",
      "filename": "media_0.jpg"
    }
  ],
  "reprocessed_at": "2025-08-12T15:25:53.759444",
  "reprocessed_reason": "missing_media_array"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27933,
    "followers_count": 216712,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3688,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216712,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1853286452227899851",
  "conversation_id": "1853286452227899851",
  "full_text": "Multi-expert Prompting with LLMs\n\nMulti-expert Prompting improves LLM responses by simulating multiple experts and aggregating their responses.\n\nMulti-expert Prompting guides an LLM to fulfill input instructions by simulating multiple experts and selecting the best response among individual and aggregated views. \n\nIt achieves a new state-of-the-art on TruthfulQA-Generation with ChatGPT, surpassing the current SOTA of 87.97%.\n\nIt also improves performance across factuality and usefulness while reducing toxicity and hurtfulness. \n\nThis is a very nice prompting approach that has huge potential when building agentic workflows. Prompt examples are shared in the paper.",
  "reply_count": 9,
  "retweet_count": 97,
  "favorite_count": 491,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GbgyDxGasAA9pQH.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1853286452227899851",
  "created_at": "2024-11-04T04:01:37.000Z",
  "#sort_index": "1853286452227899851",
  "view_count": 46731,
  "quote_count": 10,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1853286452227899851"
}