🐦 Twitter Post Details

Viewing enriched Twitter post

@ArtificialAnlys

NVIDIA has just released Nemotron 3 Nano, a ~30B MoE model that scores 52 on the Artificial Analysis Intelligence Index with just ~3B active parameters.

Hybrid Mamba-Transformer architecture: Nemotron 3 Nano combines the hybrid Mamba-Transformer approach @NVIDIAAI has used on previous Nemotron models with a moderate-sparsity MoE architecture, enabling highly efficient inference, particularly at longer sequence lengths.

Small-model improvements: with 31.6B total and 3.6B active parameters, Nemotron 3 Nano scores 52 on our Intelligence Index, in line with OpenAI's gpt-oss-20b (high). This represents a +6 point lead on the similarly-sized Qwen3 30B A3B 2507 and a +15 point improvement on NVIDIA's previous Nemotron Nano 9B V2 (a dense model).

High openness: Nemotron 3 Nano follows other recent NVIDIA models in open licensing and releases of data and methodology for the community to use and replicate. It scores a 67 on the Artificial Analysis Openness Index, in line with previous Nemotron Nano models.

Key model details:
➤ 1 million token context window, with text-only support
➤ Supports reasoning and non-reasoning modes
➤ Released under the NVIDIA Open Model License; the model is freely available for commercial use or training of derivative models
➤ On launch, the model is available from a range of serverless inference providers including @basetenco, @DeepInfra, @FireworksAI_HQ, @togethercompute and @friendliai, and it is available now on Hugging Face for local inference or self-deployment

See below for our full analysis and key announcement links from NVIDIA 👇
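The headline numbers (31.6B total, 3.6B active) follow from how a mixture-of-experts model works: every token runs through the shared weights plus only the top-k routed experts, not all of them. The sketch below is a back-of-the-envelope illustration of that arithmetic; the `shared_params`, `num_experts`, `expert_params`, and `top_k` values are assumptions chosen to land near the reported figures, not NVIDIA's actual configuration.

```python
def moe_param_counts(shared_params, num_experts, expert_params, top_k):
    """Total vs per-token active parameters for a simple MoE layout.

    shared_params:  weights every token touches (attention/Mamba blocks,
                    embeddings) -- assumed, not from the announcement
    num_experts:    experts in the routed FFN pool (assumed)
    expert_params:  parameters per expert (assumed)
    top_k:          experts activated per token (assumed)
    """
    total = shared_params + num_experts * expert_params
    active = shared_params + top_k * expert_params
    return total, active

# Hypothetical split that reproduces the reported 31.6B / 3.6B numbers:
total, active = moe_param_counts(
    shared_params=2.0e9,
    num_experts=74,
    expert_params=0.4e9,
    top_k=4,
)
print(f"total ≈ {total / 1e9:.1f}B, active ≈ {active / 1e9:.1f}B")
# → total ≈ 31.6B, active ≈ 3.6B
```

The "moderate sparsity" the post mentions is visible in the ratio: only ~11% of the parameters run per token, which is what makes inference cheap despite the ~30B total size.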

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000602570092675402/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000602570092675402/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-15T20:13:42.011170",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2000602570092675402",
  "url": "https://x.com/ArtificialAnlys/status/2000602570092675402",
  "twitterUrl": "https://twitter.com/ArtificialAnlys/status/2000602570092675402",
  "text": "NVIDIA has just released Nemotron 3 Nano, a ~30B MoE model that scores 52 on the Artificial Analysis Intelligence Index with just ~3B active parameters\n\nHybrid Mamba-Transformer architecture: Nemotron 3 Nano combines the hybrid Mamba-Transformer approach @NVIDIAAI has used on previous Nemotron models with a moderate-sparsity MoE architecture, enabling highly efficient inference, particularly at longer sequence lengths\n\nSmall-model improvements: with 31.6B total and 3.6B active parameters, Nemotron 3 Nano scores 52 on our Intelligence Index, in line with OpenAI’s gpt-oss-20b (high). This represents a +6 point lead on the similarly-sized Qwen3 30B A3B 2507 and +15 improvement on NVIDIA’s previous Nemotron Nano 9B V2 (a dense model)\n\nHigh openness: Nemotron 3 Nano follows other recent NVIDIA models in open licensing and releases of data and methodology for the community to use and replicate - it scores an 67 on the Artificial Analysis Openness Index, in line with previous Nemotron Nano models\n\nKey model details:\n➤ 1 million token context window, with text only support\n\n➤ Supports reasoning and non-reasoning modes\n\n➤ Released under the NVIDIA Open Model License; the model is freely available for commercial use or training of derivative models\n\n➤ On launch, the model is being made available with a range of serverless inference providers including @basetenco, @DeepInfra, @FireworksAI_HQ, @togethercompute and @friendliai, and it is available now on Hugging Face for local inference or self-deployment\n\nSee below for our full analysis and key announcement links from NVIDIA 👇",
  "source": "Twitter for iPhone",
  "retweetCount": 25,
  "replyCount": 5,
  "likeCount": 129,
  "quoteCount": 10,
  "viewCount": 33314,
  "createdAt": "Mon Dec 15 16:23:16 +0000 2025",
  "lang": "en",
  "bookmarkCount": 21,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2000602570092675402",
  "displayTextRange": [
    0,
    277
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "ArtificialAnlys",
    "url": "https://x.com/ArtificialAnlys",
    "twitterUrl": "https://twitter.com/ArtificialAnlys",
    "id": "1743487864934162432",
    "name": "Artificial Analysis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1810946341511766016/3mg9KIaQ_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1743487864934162432/1704519394",
    "description": "",
    "location": "San Francisco",
    "followers": 69895,
    "following": 597,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Sat Jan 06 04:21:21 +0000 2024",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 1923,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 1066,
    "statusesCount": 1709,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1809670091778207901"
    ],
    "profile_bio": {
      "description": "Independent analysis of AI models and hosting providers - choose the best model and API provider for your use-case",
      "entities": {
        "description": {},
        "url": {
          "urls": [
            {
              "display_url": "artificialanalysis.ai",
              "expanded_url": "http://artificialanalysis.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/hEm5Kv0ktE"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/sStgiDZTYT",
        "expanded_url": "https://twitter.com/ArtificialAnlys/status/2000602570092675402/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "h": 100,
                "w": 100,
                "x": 349,
                "y": 292
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "h": 100,
                "w": 100,
                "x": 349,
                "y": 292
              }
            ]
          }
        },
        "id_str": "2000601312183115776",
        "indices": [
          278,
          301
        ],
        "media_key": "3_2000601312183115776",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvDkEs12uAACgACG8ORcBcbgUoAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG8OQSzXa4AAKAAIbw5FwFxuBSgAA",
            "media_key": "3_2000601312183115776"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G8OQSzXa4AAPKLE.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 680,
              "w": 1214,
              "x": 730,
              "y": 0
            },
            {
              "h": 680,
              "w": 680,
              "x": 1264,
              "y": 0
            },
            {
              "h": 680,
              "w": 596,
              "x": 1348,
              "y": 0
            },
            {
              "h": 680,
              "w": 340,
              "x": 1530,
              "y": 0
            },
            {
              "h": 680,
              "w": 1944,
              "x": 0,
              "y": 0
            }
          ],
          "height": 680,
          "width": 1944
        },
        "sizes": {
          "large": {
            "h": 680,
            "w": 1944
          }
        },
        "type": "photo",
        "url": "https://t.co/sStgiDZTYT"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "user_mentions": [
      {
        "id_str": "740238495952736256",
        "indices": [
          255,
          264
        ],
        "name": "NVIDIA AI",
        "screen_name": "NVIDIAAI"
      },
      {
        "id_str": "1375579341178818561",
        "indices": [
          1364,
          1374
        ],
        "name": "Baseten",
        "screen_name": "basetenco"
      },
      {
        "id_str": "1623086169759318017",
        "indices": [
          1376,
          1386
        ],
        "name": "DeepInfra",
        "screen_name": "DeepInfra"
      },
      {
        "id_str": "1575886662957047812",
        "indices": [
          1388,
          1403
        ],
        "name": "Fireworks AI",
        "screen_name": "FireworksAI_HQ"
      },
      {
        "id_str": "1592266692528197632",
        "indices": [
          1405,
          1421
        ],
        "name": "Together AI",
        "screen_name": "togethercompute"
      },
      {
        "id_str": "1517294112399306752",
        "indices": [
          1426,
          1437
        ],
        "name": "friendliai",
        "screen_name": "friendliai"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}