🐦 Twitter Post Details

@omarsar0

Meta Chain-of-Thought Prompting with LLMs

Proposes a generalizable chain-of-thought (Meta-CoT) prompting method for mixed-task scenarios where the type of input question is unknown. The core idea is to bridge the gap between performance and generalization when using CoT prompting with LLMs.

Meta-CoT comprises three phases:

1) Scenario identification: samples distinct questions as in-context learning demonstrations to automatically categorize the scenario of an input question.
2) Demonstration selection: constructs diverse demonstrations from a pool based on the scenario obtained in the first phase.
3) Answer derivation: performs the final answer inference on the input question using the previously fetched demonstrations.

There are lots of interesting insights/results to analyze in the paper, but the scenario identification phase appears to play a key role in generalization and "potentially arouses the self-determination ability of LLMs without the need for manual intervention."

Meta-CoT "achieves the state-of-the-art result on SVAMP (93.7%) without any additional program-aided methods. Moreover, Meta-CoT achieves impressive performance on GSM8K (93.6%) even without in-context demonstrations from GSM8K itself."

paper: https://t.co/w16AFruSFI
code: https://t.co/48g37kiUc1
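The three phases above can be sketched as a small pipeline. This is not the authors' code: `call_llm` is a stub standing in for any chat-completion API, and the scenario labels and demonstration pool are invented for illustration.

```python
# Hypothetical sketch of the three Meta-CoT phases: (1) identify the
# scenario of the input question, (2) select demonstrations for that
# scenario, (3) run the final CoT inference with those demonstrations.

DEMO_POOL = {
    "arithmetic": [
        "Q: Tom has 3 apples and buys 2 more. How many now?\n"
        "A: He starts with 3 and adds 2, so 3 + 2 = 5. The answer is 5.",
    ],
    "commonsense": [
        "Q: Can you see stars at noon?\n"
        "A: Sunlight outshines them, so no. The answer is no.",
    ],
}

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned text for this demo."""
    if prompt.startswith("Categorize"):
        return "arithmetic"
    return "She has 4 + 6 = 10 marbles. The answer is 10."

def identify_scenario(question: str) -> str:
    # Phase 1: labeled questions from the pool serve as in-context
    # demonstrations that steer the model toward a scenario label.
    examples = "\n".join(
        f"Question: {demo.splitlines()[0][3:]}\nScenario: {label}"
        for label, demos in DEMO_POOL.items()
        for demo in demos
    )
    prompt = f"Categorize the question.\n{examples}\nQuestion: {question}\nScenario:"
    return call_llm(prompt).strip()

def select_demonstrations(scenario: str) -> list[str]:
    # Phase 2: fetch diverse demonstrations matching the scenario.
    return DEMO_POOL.get(scenario, [])

def derive_answer(question: str, demos: list[str]) -> str:
    # Phase 3: final CoT inference using the fetched demonstrations.
    prompt = "\n\n".join(demos + [f"Q: {question}\nA: Let's think step by step."])
    return call_llm(prompt)

def meta_cot(question: str) -> str:
    scenario = identify_scenario(question)
    return derive_answer(question, select_demonstrations(scenario))
```

In a real implementation the stub would be replaced by an actual model call, and the pool would hold per-scenario CoT demonstrations as described in the paper.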

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "I share insights & advances in LLMs • Building @dair_ai • Prev: Meta AI, Galactica LLM, PapersWithCode, Elastic, PhD • Author of Prompting Guide (1.8M users)",
    "fast_followers_count": 0,
    "favourites_count": 23315,
    "followers_count": 162179,
    "friends_count": 427,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3000,
    "location": "",
    "media_count": 1703,
    "name": "elvis",
    "normal_followers_count": 162179,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 9511,
    "translator_type": "regular",
    "url": "https://t.co/o4KzoHf52W",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1712835499256090972",
  "conversation_id": "1712835499256090972",
  "full_text": "Meta Chain-of-Thought Prompting with LLMs\n\nProposes a generalizable chain-of-thought (Meta-CoT) prompting method in mixed-task scenarios where the type of input questions is unknown. \n\nThe core idea is to bridge the gap between performance and generalization when using the CoT prompting method with LLMs. \n\nMeta-CoT is comprised of three phases:\n\n1) Scenario identification: samples distinct questions as in-context learning demonstrations to help automatically categorize scenarios based on input questions\n2) Demonstration selection: constructs diverse demonstrations from a pool based on the scenario obtained in the first phase\n3) Answer derivation: performs a final answer inference on the input question using previously fetched demonstrations\n\nLots of interesting insights/results to analyze from the paper but it seems that the scenario identification phase plays a key role in generalization and \"potentially arouses the self-determination ability of LLMs without the need for manual intervention.\"\n\nMetaCoT \"achieves the state-of-the-art result on SVAMP (93.7%) without any additional program-aided methods. Moreover, Meta-CoT achieves impressive performance on GSM8K (93.6%) even without in-context demonstrations from GSM8K itself.\"\n\npaper: https://t.co/w16AFruSFI\ncode: https://t.co/48g37kiUc1",
  "reply_count": 5,
  "retweet_count": 84,
  "favorite_count": 382,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F8UwsqtWYAADN-v.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1712835499256090972",
  "created_at": "2023-10-13T14:19:40.000Z",
  "#sort_index": "1712835499256090972",
  "view_count": 72444,
  "quote_count": 3,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/omarsar0/status/1712835499256090972"
}