🐦 Twitter Post Details


@omarsar0

Logical Chain-of-Thought in LLMs

Proposes a new neurosymbolic framework to improve zero-shot chain-of-thought reasoning in LLMs.

Leverages principles from symbolic logic to verify and revise reasoning processes to improve the reasoning capabilities of LLMs.

The think-verify-revise framework is a neat idea and it might be useful to deal with hallucination issues that appear in different scenarios, especially those that require multi-step reasoning.

Shows efficacy in domains like arithmetic, commonsense, and causal inference, among others. The effective use of knowledge by enhancing reasoning via logic intuitively makes sense but it sounds expensive given the inefficiencies of LLMs today.

https://t.co/yWsDLOamgC
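The think-verify-revise loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `verify` and `revise` callables stand in for the symbolic-logic verifier and the LLM-driven revision step, and all names here are hypothetical.

```python
from typing import Callable, List

def think_verify_revise(
    steps: List[str],
    verify: Callable[[str], bool],
    revise: Callable[[str], str],
    max_revisions: int = 3,
) -> List[str]:
    """Check each chain-of-thought step with a verifier; revise failing
    steps up to max_revisions times before moving on (hypothetical sketch)."""
    checked = []
    for step in steps:
        attempts = 0
        # Re-prompt for a revision while the verifier rejects the step.
        while not verify(step) and attempts < max_revisions:
            step = revise(step)
            attempts += 1
        checked.append(step)
    return checked

# Toy stand-ins: a "verifier" that rejects steps tagged "wrong",
# and a "reviser" that returns a corrected step.
steps = ["2 + 2 = 4", "wrong: 3 * 3 = 6"]
fixed = think_verify_revise(
    steps,
    verify=lambda s: "wrong" not in s,
    revise=lambda s: "3 * 3 = 9",
)
print(fixed)  # ['2 + 2 = 4', '3 * 3 = 9']
```

The loop makes the tweet's cost concern concrete: every rejected step triggers another model call, so the worst case multiplies inference cost by `max_revisions` per step.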

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "I share insights & advances in LLMs • Building @dair_ai • Prev: Meta AI, Galactica LLM, PapersWithCode, Elastic, PhD • Author of Prompting Guide (1.7M users)",
    "fast_followers_count": 0,
    "favourites_count": 23277,
    "followers_count": 160404,
    "friends_count": 418,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 2946,
    "location": "",
    "media_count": 1683,
    "name": "elvis",
    "normal_followers_count": 160404,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 9470,
    "translator_type": "regular",
    "url": "https://t.co/o4KzoHf52W",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1706711389803287019",
  "conversation_id": "1706711389803287019",
  "full_text": "Logical Chain-of-Thought in LLMs\n\nProposes a new neurosymbolic framework to improve zero-shot chain-of-thought reasoning in LLMs.\n\nLeverages principles from symbolic logic to verify and revise reasoning processes to improve the reasoning capabilities of LLMs.\n\nThe think-verify-revise framework is a neat idea and it might be useful to deal with hallucination issues that appear in different scenarios especially those that require multi-step reasoning. \n\nShows efficacy in domains like arithmetic, commonsense, and causal inference, among others. The effective use of knowledge by enhancing reasoning via logic intuitively makes sense but it sounds expensive given the inefficiencies of LLMs today.\n\nhttps://t.co/yWsDLOamgC",
  "reply_count": 4,
  "retweet_count": 64,
  "favorite_count": 315,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F69yLWJWwAIgD36.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1706711389803287019",
  "created_at": "2023-09-26T16:44:39.000Z",
  "#sort_index": "1706711389803287019",
  "view_count": 42486,
  "quote_count": 3,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/omarsar0/status/1706711389803287019"
}
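For reference, the response above is plain JSON and can be consumed directly. The snippet below parses a trimmed copy of it (only a few of the fields shown above are included, for brevity) and builds a one-line engagement summary; the variable names are illustrative, not part of any API.

```python
import json

# A trimmed copy of the raw API response shown above.
raw = '''{
  "user": {"screen_name": "omarsar0", "followers_count": 160404},
  "id": "1706711389803287019",
  "favorite_count": 315,
  "retweet_count": 64,
  "view_count": 42486,
  "created_at": "2023-09-26T16:44:39.000Z"
}'''

post = json.loads(raw)
summary = (
    f'@{post["user"]["screen_name"]}: '
    f'{post["favorite_count"]} likes, '
    f'{post["retweet_count"]} retweets, '
    f'{post["view_count"]} views'
)
print(summary)  # @omarsar0: 315 likes, 64 retweets, 42486 views
```

Note that `id` and `conversation_id` arrive as strings (`id_str`-style), since tweet IDs exceed the safe integer range in JavaScript clients.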