🐦 Twitter Post Details


@omarsar0

Training LLMs to Reason in a Continuous Latent Space

Meta presents Coconut (Chain of Continuous Thought), a novel paradigm that enables LLMs to reason in continuous latent space rather than natural language.

Coconut takes the last hidden state of the LLM as the reasoning state and feeds it back to the LLM as the subsequent input embedding directly in the continuous space.

This leads to what the authors refer to as "continuous thought", which augments an LLM's capability on reasoning tasks.

It demonstrates improved performance on complex reasoning tasks through emergent breadth-first search capabilities.
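The feedback loop described above can be sketched with a toy model. This is a hypothetical illustration, not Meta's implementation: `llm_step`, the weight matrix `W`, and the dimensionality are all stand-ins for a real transformer forward pass. The one idea it shows is that the last hidden state is reused directly as the next input embedding, with no decoding to tokens in between.

```python
# Toy sketch of Coconut-style "continuous thought" (hypothetical; the real
# method uses an LLM's transformer forward pass, not a single matrix).
import numpy as np

rng = np.random.default_rng(0)
D = 8                                          # embedding/hidden dimensionality
W = rng.standard_normal((D, D)) / np.sqrt(D)   # stand-in for the model's weights

def llm_step(x):
    """Toy forward pass: maps an input embedding to a 'last hidden state'."""
    return np.tanh(W @ x)

def continuous_thought(x0, n_steps):
    """Reason in latent space: each hidden state becomes the next input
    embedding, so intermediate 'thoughts' are never decoded into tokens."""
    h = x0
    for _ in range(n_steps):
        h = llm_step(h)
    return h

h = continuous_thought(rng.standard_normal(D), n_steps=4)
print(h.shape)
```

Only after the latent reasoning steps would the final state be handed back to the normal language-model head for decoding.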

🔧 Raw API Response

{
  "user": {
    "created_at": "2015-09-04T12:59:26.000Z",
    "default_profile_image": false,
    "description": "Building with AI Agents @dair_ai • Prev: Meta AI, Elastic, Galactica LLM, PhD • I also teach how to build with LLMs, RAG & AI Agents ⬇️",
    "fast_followers_count": 0,
    "favourites_count": 27931,
    "followers_count": 216715,
    "friends_count": 532,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3688,
    "location": "",
    "media_count": 2656,
    "name": "elvis",
    "normal_followers_count": 216715,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "screen_name": "omarsar0",
    "statuses_count": 12439,
    "translator_type": "regular",
    "url": "https://t.co/JBU5beHQNs",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "3448284313"
  },
  "id": "1866518791733342563",
  "conversation_id": "1866518791733342563",
  "full_text": "Training LLMs to Reason in a Continuous Latent Space\n\nMeta presents Coconut (Chain of Continuous Thought), a novel paradigm that enables LLMs to reason in continuous latent space rather than natural language.\n\nCoconut takes the last hidden state of the LLM as the reasoning state and feeds it back to the LLM as the subsequent input embedding directly in the continuous space. \n\nThis leads to what the authors refer to as \"continuous thought\" which augments an LLM's capability on reasoning tasks. \n\nIt demonstrates improved performance on complex reasoning tasks through emergent breadth-first search capabilities.",
  "reply_count": 17,
  "retweet_count": 95,
  "favorite_count": 423,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GeczerBWoAAXeRq.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/omarsar0/status/1866518791733342563",
  "created_at": "2024-12-10T16:22:12.000Z",
  "#sort_index": "1866518791733342563",
  "view_count": 71646,
  "quote_count": 9,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/omarsar0/status/1866518791733342563"
}
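For reference, the counts in the response above can be pulled out with standard JSON parsing. This is an illustrative sketch only; the field names are copied from the response shown, and the "engagement" sum is an ad-hoc metric defined here, not part of the API.

```python
import json

# Subset of the raw API response above (values copied from the source).
raw = '''{
  "user": {"screen_name": "omarsar0", "followers_count": 216715},
  "reply_count": 17,
  "retweet_count": 95,
  "favorite_count": 423,
  "view_count": 71646
}'''

post = json.loads(raw)

# Ad-hoc engagement figure: replies + retweets + favorites.
engagement = post["reply_count"] + post["retweet_count"] + post["favorite_count"]
rate = engagement / post["view_count"]
print(f'@{post["user"]["screen_name"]}: {engagement} engagements ({rate:.2%} of views)')
```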