🐦 Twitter Post Details

@iScienceLuvr

Training Large Language Models to Reason in a Continuous Latent Space

Introduces a new paradigm for LLM reasoning called Chain of Continuous Thought (COCONUT)

Extremely simple change: instead of mapping between hidden states and language tokens using the LLM head and embedding layer, you directly feed the last hidden state (a continuous thought) as the input embedding for the next token.

The system can be optimized end-to-end by gradient descent, as continuous thoughts are fully differentiable.
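
The core loop is easy to sketch. Below is a minimal, hypothetical PyTorch illustration of that change, not the paper's code: during the "thought" phase the last hidden state is concatenated back onto the input embeddings instead of being decoded through the LM head and re-embedded. TinyDecoder, rollout_continuous_thoughts, and num_thoughts are all illustrative names.

import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Toy decoder-only LM for illustration; not the paper's model."""
    def __init__(self, vocab_size=100, d_model=64, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def hidden_states(self, embeds):
        # Causal mask so each position attends only to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(embeds.size(1))
        return self.blocks(embeds, mask=mask)

def rollout_continuous_thoughts(model, token_ids, num_thoughts=3):
    # Standard CoT would decode each step through lm_head and re-embed the
    # sampled token; here the last hidden state is fed straight back as the
    # next input embedding, so the loop never leaves continuous space.
    embeds = model.embed(token_ids)                   # (batch, seq, d_model)
    for _ in range(num_thoughts):
        h = model.hidden_states(embeds)
        thought = h[:, -1:, :]                        # last hidden state = one continuous thought
        embeds = torch.cat([embeds, thought], dim=1)  # feed it back as the next input
    # Only the final answer step goes through the LM head.
    return model.lm_head(model.hidden_states(embeds)[:, -1, :])

model = TinyDecoder()
prompt = torch.randint(0, 100, (1, 5))               # dummy question tokens
logits = rollout_continuous_thoughts(model, prompt)
loss = -logits.log_softmax(-1)[0, 7]                 # toy loss on an arbitrary answer token
loss.backward()                                      # gradients flow through every thought

Because the thought vectors never pass through the discrete token bottleneck, the loss at the final answer step backpropagates through every thought, which is the end-to-end differentiability the post highlights.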

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1866353795502158163/media_0.jpg",
      "type": "photo",
      "original_url": "https://pbs.twimg.com/media/GeadrgBacAAHH50.jpg",
      "download_date": "2025-08-13T06:02:20.078393",
      "stored_in_supabase": true
    }
  ],
  "conversion_date": "2025-08-13T00:40:42.850328",
  "format_converted": true,
  "original_structure": "had_media_only"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2011-12-20T03:45:50.000Z",
    "default_profile_image": false,
    "description": "PhD at 19 |\nFounder and CEO at @MedARC_AI |\nResearch Director at @StabilityAI | \n@kaggle Notebooks GM |\nBiomed. engineer @ 14 |\nTEDx talk➡https://t.co/xPxwKTpz0D",
    "fast_followers_count": 0,
    "favourites_count": 83827,
    "followers_count": 63880,
    "friends_count": 1100,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 974,
    "location": "",
    "media_count": 1828,
    "name": "Tanishq Mathew Abraham, Ph.D.",
    "normal_followers_count": 63880,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/441465751/1675968078",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1553508977735962624/nnlSwBmu_normal.jpg",
    "screen_name": "iScienceLuvr",
    "statuses_count": 14515,
    "translator_type": "none",
    "url": "https://t.co/nNzCz2VVd1",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "441465751"
  },
  "id": "1866353795502158163",
  "conversation_id": "1866353795502158163",
  "full_text": "Training Large Language Models to Reason in a Continuous Latent Space\n\nIntroduces a new paradigm for LLM reasoning called Chain of Continuous Thought (COCONUT)\n\nExtremely simple change: instead of mapping between hidden states and language tokens using the LLM head and embedding layer, you directly feed the last hidden state (a continuous thought) as the input embedding for the next token.\n\nThe system can be optimized end-to-end by gradient descent, as continuous thoughts are fully differentiable.",
  "reply_count": 50,
  "retweet_count": 297,
  "favorite_count": 1889,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GeadrgBacAAHH50.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/iScienceLuvr/status/1866353795502158163",
  "created_at": "2024-12-10T05:26:34.000Z",
  "#sort_index": "1866353795502158163",
  "view_count": 319770,
  "quote_count": 87,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/iscienceluvr/status/1866353795502158163"
}