🐦 Twitter Post Details


@ArtificialAnlys

MBZUAI’s Institute of Foundation Models has released K2-V2, a 70B reasoning model that is tied for #1 in our Openness Index, and is the first model on our leaderboards from the UAE.

📖 Tied leader in Openness: K2-V2 joins OLMo 3 32B Think at the top of the Artificial Analysis Openness Index - our newly released, standardized, independently assessed measure of AI model openness across availability and transparency. MBZUAI went beyond open access and licensing of the model weights - they provide full access to pre- and post-training data. They also publish training methodology and code with a permissive Apache license allowing free use for any purpose. This makes K2-V2 a valuable contribution to the open source community and allows more effective fine-tuning. See links below!

🧠 Strong medium-sized (40-150B) open weights model: At 70B, K2-V2 scores 46 on our Intelligence Index with its High reasoning mode. This puts it above Llama Nemotron Super 49B v1.5 but below Qwen3 Next 80B A3B. The model has a relative strength in instruction following, with a score of 60% in IFBench.

🇦🇪 First UAE entrant on our leaderboards: In a sea of largely US and Chinese models, K2-V2 stands out as the first representation of the UAE on our leaderboards, and the second entrant from the Middle East after Israel’s AI21 Labs. K2-V2 is the first MBZUAI model we have benchmarked, but the lab has previously released models with a particular focus on language representation, including Egyptian Arabic and Hindi.

📊 Lower reasoning modes reduce token use & hallucination: K2-V2 has 3 reasoning modes, with the High reasoning mode using a substantial ~130M tokens to complete our Intelligence Index. However, the Medium mode reduces token usage by ~6x with only a 6pt drop in our Intelligence Index. Interestingly, lower reasoning modes score better on our knowledge and hallucination index, AA-Omniscience, due to a reduced tendency to hallucinate.

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2001686748104192462/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2001686748104192462/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-18T18:38:32.238820",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2001686748104192462",
  "url": "https://x.com/ArtificialAnlys/status/2001686748104192462",
  "twitterUrl": "https://twitter.com/ArtificialAnlys/status/2001686748104192462",
  "text": "MBZUAI’s Institute of Foundation Models has released K2-V2, a 70B reasoning model that is tied for #1 in our Openness Index, and is the first model on our leaderboards from the UAE\n\n📖 Tied leader in Openness: K2-V2 joins OLMo 3 32B Think at the top of the Artificial Analysis Openness Index - our newly released, standardized, independently assessed measure of AI model openness across availability and transparency. MBZUAI went beyond open access and licensing of the model weights - they provide full access to pre- and post-training data. They also publish training methodology and code with a permissive Apache license allowing free use for any purpose. This makes K2-V2 a valuable contribution to the open source community and allows more effective fine-tuning. See links below!\n\n🧠 Strong medium-sized (40-150B) open weights model: At 70B, K2-V2 scores 46 on our Intelligence Index with its High reasoning mode. This puts it above Llama Nemotron Super 49B v1.5 but below Qwen3 Next 80B A3B. The model has a relative strength in instruction following with a score of 60% in IFBench\n\n🇦🇪 First UAE entrant on our leaderboards: In a sea of largely US and Chinese models, K2-V2 stands out as the first representation of the UAE in our leaderboards, and the second entrant from the Middle East after Israel’s AI21 labs. K2-V2 is the first MBZUAI model we have benchmarked, but the lab has previously released models with a particular focus on language representation including Egyptian Arabic and Hindi\n\n📊 Lower reasoning modes reduce token use & hallucination: K2-V2 has 3 reasoning modes, with the High reasoning mode using a substantial ~130M tokens to complete our Intelligence Index. However, the Medium mode reduces token usage by ~6x with only a 6pt drop in our Intelligence Index. Interestingly, lower reasoning modes score better in our knowledge and hallucination index, AA-Omniscience, due to a reduced tendency to hallucinate",
  "source": "Twitter for iPhone",
  "retweetCount": 11,
  "replyCount": 1,
  "likeCount": 50,
  "quoteCount": 2,
  "viewCount": 4573,
  "createdAt": "Thu Dec 18 16:11:24 +0000 2025",
  "lang": "en",
  "bookmarkCount": 7,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2001686748104192462",
  "displayTextRange": [
    0,
    276
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "ArtificialAnlys",
    "url": "https://x.com/ArtificialAnlys",
    "twitterUrl": "https://twitter.com/ArtificialAnlys",
    "id": "1743487864934162432",
    "name": "Artificial Analysis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1810946341511766016/3mg9KIaQ_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1743487864934162432/1704519394",
    "description": "Independent analysis of AI models and hosting providers - choose the best model and API provider for your use-case",
    "location": "San Francisco",
    "followers": 70733,
    "following": 599,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Sat Jan 06 04:21:21 +0000 2024",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {
        "urls": [
          {
            "display_url": "artificialanalysis.ai",
            "expanded_url": "http://artificialanalysis.ai/",
            "url": "https://t.co/hEm5Kv0ktE",
            "indices": [
              0,
              23
            ]
          }
        ]
      }
    },
    "fastFollowersCount": 0,
    "favouritesCount": 1936,
    "hasCustomTimelines": false,
    "isTranslator": false,
    "mediaCount": 1104,
    "statusesCount": 1755,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1809670091778207901"
    ],
    "profile_bio": {
      "description": "Independent analysis of AI models and hosting providers - choose the best model and API provider for your use-case"
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.x.com/GFoHhDMqOR",
        "expanded_url": "https://x.com/ArtificialAnlys/status/2001686748104192462/photo/1",
        "id_str": "2001684359372509191",
        "indices": [
          277,
          300
        ],
        "media_key": "3_2001684359372509191",
        "media_url_https": "https://pbs.twimg.com/media/G8dpUcjakAc6Uwx.jpg",
        "type": "photo",
        "url": "https://t.co/GFoHhDMqOR",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "medium": {
            "faces": []
          },
          "small": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "sizes": {
          "large": {
            "h": 1252,
            "w": 2048,
            "resize": "fit"
          },
          "medium": {
            "h": 733,
            "w": 1200,
            "resize": "fit"
          },
          "small": {
            "h": 416,
            "w": 680,
            "resize": "fit"
          },
          "thumb": {
            "h": 150,
            "w": 150,
            "resize": "crop"
          }
        },
        "original_info": {
          "height": 2503,
          "width": 4096,
          "focus_rects": [
            {
              "x": 0,
              "y": 0,
              "w": 4096,
              "h": 2294
            },
            {
              "x": 1593,
              "y": 0,
              "w": 2503,
              "h": 2503
            },
            {
              "x": 1900,
              "y": 0,
              "w": 2196,
              "h": 2503
            },
            {
              "x": 2844,
              "y": 0,
              "w": 1252,
              "h": 2503
            },
            {
              "x": 0,
              "y": 0,
              "w": 4096,
              "h": 2503
            }
          ]
        },
        "allow_download_status": {
          "allow_download": true
        },
        "media_results": {
          "result": {
            "media_key": "3_2001684359372509191"
          }
        }
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "article": null
}