🐦 Twitter Post Details


@PavloMolchanov

What if one training run could produce multiple high-quality LLMs for free? 🤔 Turns out it can.

❗️We’re releasing Nemotron-Elastic-12B - a single model that gives you 6B/9B/12B variants without extra training cost.

✨ Highlights:
- Many-in-one model: Zero-shot slicing gives you 6B, 9B, 12B from the same checkpoint. No retraining. No extra runs.
- Constant training cost: Traditional pipelines pay linearly for each size. Elastic cuts this to ~constant — 7.2× token savings for the 6B/9B/12B family.
- Constant deployment memory: All variants fit in 24GB (just the 12B footprint). 2.25× reduction vs storing separate checkpoints.
- Great reasoning: Hybrid Mamba-2 + Transformer architecture, competitive with same-size models on MATH-500, AIME, GPQA, LCB, etc.
- Perfect for edge: Pick the right model size on-device without juggling multiple checkpoints or retraining.

Elastic models = less compute, less memory, higher accuracy — and all from a single model.

📖 Read the full technical paper: https://t.co/DrxZCyvvjX
🤗 Explore the model: https://t.co/3PnZudn5PW
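The 2.25× deployment-memory figure in the highlights follows directly from parameter counts, assuming the alternative is storing the three variants as independent checkpoints:

```python
# Storing separate 6B, 9B, and 12B checkpoints costs 6 + 9 + 12 = 27B
# parameters, while the elastic model stores only the 12B superset,
# from which the smaller variants are sliced at load time.
separate_checkpoints = 6 + 9 + 12  # billions of parameters, three models
elastic_checkpoint = 12            # the single 12B checkpoint

reduction = separate_checkpoints / elastic_checkpoint
print(reduction)  # 2.25
```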

Media 1
Media 2

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1991984496296792326/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1991984496296792326/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    },
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1991984496296792326/media_1.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1991984496296792326/media_1.jpg?",
      "type": "photo",
      "filename": "media_1.jpg"
    }
  ],
  "processed_at": "2025-11-27T20:39:43.349436",
  "pipeline_version": "2.0"
}
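The media-metadata payload above is plain JSON, so pulling out the attachments is straightforward. A minimal sketch, with the storage URLs shortened here for brevity (the real entries carry the full Supabase URLs shown above):

```python
import json

# Payload in the same shape as the media metadata above.
payload = json.loads("""
{
  "media": [
    {"url": "https://example.invalid/media_0.jpg", "type": "photo", "filename": "media_0.jpg"},
    {"url": "https://example.invalid/media_1.jpg", "type": "photo", "filename": "media_1.jpg"}
  ],
  "processed_at": "2025-11-27T20:39:43.349436",
  "pipeline_version": "2.0"
}
""")

# Collect photo filenames, skipping any non-photo attachments.
photos = [m["filename"] for m in payload["media"] if m["type"] == "photo"]
print(photos)  # ['media_0.jpg', 'media_1.jpg']
```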

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1991984496296792326",
  "url": "https://x.com/PavloMolchanov/status/1991984496296792326",
  "twitterUrl": "https://twitter.com/PavloMolchanov/status/1991984496296792326",
  "text": "What if one training run could produce multiple high-quality LLMs for free? 🤔Turns out it can. \n\n❗️We’re releasing Nemotron-Elastic-12B - a single model that gives you 6B/9B/12B variants without extra training cost.\n\n✨ Highlights:\n- Many-in-one model: \nZero-shot slicing gives you 6B, 9B, 12B from the same checkpoint. No retraining. No extra runs.\n\n- Constant training cost: \nTraditional pipelines pay linearly for each size. Elastic cuts this to ~constant — 7.2× token savings for the 6B/9B/12B family.\n\n- Constant deployment memory: \nAll variants fit in 24GB (just the 12B footprint). 2.25× reduction vs storing separate checkpoints.\n\n- Great reasoning: \nHybrid Mamba-2 + Transformer architecture, competitive with same-size models on MATH-500, AIME, GPQA, LCB, etc.\n\n- Perfect for edge: \nPick the right model size on-device without juggling multiple checkpoints or retraining.\n\nElastic models = less compute, less memory, higher accuracy — and all from a single model.\n\n📖 Read the full technical paper: https://t.co/DrxZCyvvjX\n🤗 Explore the model: https://t.co/3PnZudn5PW",
  "source": "Twitter for iPhone",
  "retweetCount": 52,
  "replyCount": 6,
  "likeCount": 298,
  "quoteCount": 6,
  "viewCount": 31799,
  "createdAt": "Fri Nov 21 21:38:07 +0000 2025",
  "lang": "en",
  "bookmarkCount": 190,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1991984496296792326",
  "displayTextRange": [
    0,
    277
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "PavloMolchanov",
    "url": "https://x.com/PavloMolchanov",
    "twitterUrl": "https://twitter.com/PavloMolchanov",
    "id": "2368348172",
    "name": "Pavlo Molchanov",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1207757973226786816/KQVdX0w3_normal.jpg",
    "coverPicture": "",
    "description": "",
    "location": "Bay Area",
    "followers": 3477,
    "following": 429,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Sun Mar 02 05:15:30 +0000 2014",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 1729,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 74,
    "statusesCount": 336,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [],
    "profile_bio": {
      "description": "Director of Research @NVIDIA",
      "entities": {
        "description": {
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                21,
                28
              ],
              "name": "",
              "screen_name": "NVIDIA"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "pmolchanov.com",
              "expanded_url": "https://pmolchanov.com",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/AFb7u7SjBF"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/l4cg7gsAYH",
        "expanded_url": "https://twitter.com/PavloMolchanov/status/1991984496296792326/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "1991980541852467200",
        "indices": [
          278,
          301
        ],
        "media_key": "3_1991980541852467200",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARuk77+0m0AACgACG6TzWGvbEQYAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG6Tvv7SbQAAKAAIbpPNYa9sRBgAA",
            "media_key": "3_1991980541852467200"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G6Tvv7SbQAAQzae.png",
        "original_info": {
          "focus_rects": [
            {
              "h": 913,
              "w": 1630,
              "x": 0,
              "y": 0
            },
            {
              "h": 1344,
              "w": 1344,
              "x": 0,
              "y": 0
            },
            {
              "h": 1344,
              "w": 1179,
              "x": 0,
              "y": 0
            },
            {
              "h": 1344,
              "w": 672,
              "x": 112,
              "y": 0
            },
            {
              "h": 1344,
              "w": 1630,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1344,
          "width": 1630
        },
        "sizes": {
          "large": {
            "h": 1344,
            "w": 1630
          }
        },
        "type": "photo",
        "url": "https://t.co/l4cg7gsAYH"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "arxiv.org/pdf/2511.16664…",
        "expanded_url": "https://arxiv.org/pdf/2511.16664v1",
        "indices": [
          1007,
          1030
        ],
        "url": "https://t.co/DrxZCyvvjX"
      },
      {
        "display_url": "huggingface.co/nvidia/Nemotro…",
        "expanded_url": "https://huggingface.co/nvidia/Nemotron-Elastic-12B",
        "indices": [
          1052,
          1075
        ],
        "url": "https://t.co/3PnZudn5PW"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}
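The `entities.urls` array in the raw response maps each t.co shortlink in the tweet text to its real target, which is how the arXiv and Hugging Face links above were resolved. A sketch of that expansion, using a trimmed copy of the response that keeps only the fields this touches:

```python
# Trimmed-down tweet object in the shape of the raw API response above.
tweet = {
    "text": "Paper: https://t.co/DrxZCyvvjX Model: https://t.co/3PnZudn5PW",
    "entities": {
        "urls": [
            {"url": "https://t.co/DrxZCyvvjX",
             "expanded_url": "https://arxiv.org/pdf/2511.16664v1"},
            {"url": "https://t.co/3PnZudn5PW",
             "expanded_url": "https://huggingface.co/nvidia/Nemotron-Elastic-12B"},
        ]
    },
}

# Replace each shortlink with its expanded target.
text = tweet["text"]
for u in tweet["entities"]["urls"]:
    text = text.replace(u["url"], u["expanded_url"])
print(text)
```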