🐦 Twitter Post Details

@omarsar0

New benchmark from Google Research.

Models get better at benchmarks, but do they actually get more factual?

Previous evaluations focused on narrow slices: grounding to documents, answering from memory, or using search. A model excelling at one often fails at another.

This new research introduces the FACTS Leaderboard, a comprehensive suite that measures factuality across four distinct dimensions:

- FACTS Multimodal tests visual grounding combined with world knowledge on ~1,500 image-based questions.
- FACTS Parametric assesses closed-book factoid recall using 2,104 adversarially-sampled questions that stumped open-weight models.
- FACTS Search evaluates information-seeking with web tools across 1,884 queries including multi-hop reasoning.
- FACTS Grounding v2 tests whether long-form responses stay faithful to provided documents.

The aggregate FACTS Score averages performance across all four.

Results:

Gemini 3 Pro leads with 68.8% overall. Gemini 2.5 Pro follows at 62.1%, then GPT-5 at 61.8%.

But the sub-scores tell a different story. Claude models are precision-oriented, achieving high no-contradiction rates but hedging frequently on parametric questions. Claude 4 Sonnet doesn't attempt 45.1% of parametric queries. GPT models show higher coverage but more contradictions.

On multimodal, even the best models only reach ~47% accuracy when requiring both complete coverage and zero contradictions. On parametric knowledge, the spread is enormous: Gemini 3 Pro hits 76.4% while GPT-5 mini manages just 16.0%.

The benchmark maintains both public and private splits to prevent overfitting. All evaluation runs through Kaggle with standardized search tools for fair comparison. A single factuality number hides crucial behavioral differences. Some models guess aggressively, others hedge conservatively. This suite exposes those tradeoffs across the contexts where factuality actually matters.

Paper: https://t.co/TCHOSGlQKs

Learn how to evaluate and build effective AI agents in our academy: https://t.co/JBU5beIoD0
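The post describes the aggregate FACTS Score as an average of the four per-dimension scores. A minimal sketch of that aggregation, assuming a simple unweighted mean; the input numbers here are hypothetical placeholders, not figures from the paper:

```python
def facts_score(multimodal: float, parametric: float,
                search: float, grounding: float) -> float:
    """Aggregate FACTS Score as the unweighted mean of the four dimensions."""
    return (multimodal + parametric + search + grounding) / 4

# Hypothetical per-dimension scores, for illustration only.
print(facts_score(47.0, 76.4, 70.0, 80.0))  # 68.35
```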

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000935220049273303/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2000935220049273303/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-16T14:43:46.308549",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2000935220049273303",
  "url": "https://x.com/omarsar0/status/2000935220049273303",
  "twitterUrl": "https://twitter.com/omarsar0/status/2000935220049273303",
  "text": "New benchmark from Google Research.\n\nModels get better at benchmarks, but do they actually get more factual?\n\nPrevious evaluations focused on narrow slices: grounding to documents, answering from memory, or using search. A model excelling at one often fails at another.\n\nThis new research introduces the FACTS Leaderboard, a comprehensive suite that measures factuality across four distinct dimensions:\n\n- FACTS Multimodal tests visual grounding combined with world knowledge on ~1,500 image-based questions.\n\n- FACTS Parametric assesses closed-book factoid recall using 2,104 adversarially-sampled questions that stumped open-weight models.\n\n- FACTS Search evaluates information-seeking with web tools across 1,884 queries including multi-hop reasoning.\n\n- FACTS Grounding v2 tests whether long-form responses stay faithful to provided documents.\n\nThe aggregate FACTS Score averages performance across all four.\n\nResults:\n\nGemini 3 Pro leads with 68.8% overall. Gemini 2.5 Pro follows at 62.1%, then GPT-5 at 61.8%.\n\nBut the sub-scores tell a different story. Claude models are precision-oriented, achieving high no-contradiction rates but hedging frequently on parametric questions. Claude 4 Sonnet doesn't attempt 45.1% of parametric queries. GPT models show higher coverage but more contradictions.\n\nOn multimodal, even the best models only reach ~47% accuracy when requiring both complete coverage and zero contradictions. On parametric knowledge, the spread is enormous: Gemini 3 Pro hits 76.4% while GPT-5 mini manages just 16.0%.\n\nThe benchmark maintains both public and private splits to prevent overfitting. All evaluation runs through Kaggle with standardized search tools for fair comparison. A single factuality number hides crucial behavioral differences. Some models guess aggressively, others hedge conservatively. This suite exposes those tradeoffs across the contexts where factuality actually matters.\n\nPaper: https://t.co/TCHOSGlQKs\n\nLearn how to evaluate and build effective AI agents in our academy: https://t.co/JBU5beIoD0",
  "source": "Twitter for iPhone",
  "retweetCount": 1,
  "replyCount": 1,
  "likeCount": 7,
  "quoteCount": 0,
  "viewCount": 703,
  "createdAt": "Tue Dec 16 14:25:06 +0000 2025",
  "lang": "en",
  "bookmarkCount": 9,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2000935220049273303",
  "displayTextRange": [
    0,
    280
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "omarsar0",
    "url": "https://x.com/omarsar0",
    "twitterUrl": "https://twitter.com/omarsar0",
    "id": "3448284313",
    "name": "elvis",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/939313677647282181/vZjFWtAn_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3448284313/1565974901",
    "description": "",
    "location": "DAIR.AI Academy",
    "followers": 279541,
    "following": 735,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Sep 04 12:59:26 +0000 2015",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 33964,
    "hasCustomTimelines": true,
    "isTranslator": true,
    "mediaCount": 4384,
    "statusesCount": 16753,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2000626975296405525"
    ],
    "profile_bio": {
      "description": "Building @dair_ai • Prev: Meta AI, Elastic, PhD • New cohort: https://t.co/GZMhf39NRs",
      "entities": {
        "description": {
          "urls": [
            {
              "display_url": "dair-ai.thinkific.com/courses/claude…",
              "expanded_url": "https://dair-ai.thinkific.com/courses/claude-code-for-everyone-2",
              "indices": [
                62,
                85
              ],
              "url": "https://t.co/GZMhf39NRs"
            }
          ],
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                9,
                17
              ],
              "name": "",
              "screen_name": "dair_ai"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "dair.ai",
              "expanded_url": "https://www.dair.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/XQto5ypkSM"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/RvENMBBOQn",
        "expanded_url": "https://twitter.com/omarsar0/status/2000935220049273303/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "h": 94,
                "w": 94,
                "x": 5,
                "y": 771
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "h": 94,
                "w": 94,
                "x": 5,
                "y": 771
              }
            ]
          }
        },
        "id_str": "2000935215833997312",
        "indices": [
          281,
          304
        ],
        "media_key": "3_2000935215833997312",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvEv/o2WuAACgACG8S/+zGa4dcAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG8S/+jZa4AAKAAIbxL/7MZrh1wAA",
            "media_key": "3_2000935215833997312"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G8S_-jZa4AABqAv.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 821,
              "w": 1466,
              "x": 0,
              "y": 0
            },
            {
              "h": 1466,
              "w": 1466,
              "x": 0,
              "y": 0
            },
            {
              "h": 1671,
              "w": 1466,
              "x": 0,
              "y": 0
            },
            {
              "h": 1800,
              "w": 900,
              "x": 403,
              "y": 0
            },
            {
              "h": 1800,
              "w": 1466,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1800,
          "width": 1466
        },
        "sizes": {
          "large": {
            "h": 1800,
            "w": 1466
          }
        },
        "type": "photo",
        "url": "https://t.co/RvENMBBOQn"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "arxiv.org/abs/2512.10791",
        "expanded_url": "https://arxiv.org/abs/2512.10791",
        "indices": [
          1929,
          1952
        ],
        "url": "https://t.co/TCHOSGlQKs"
      },
      {
        "display_url": "dair-ai.thinkific.com",
        "expanded_url": "https://dair-ai.thinkific.com/",
        "indices": [
          2022,
          2045
        ],
        "url": "https://t.co/JBU5beIoD0"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}