🐦 Twitter Post Details


@ValerioCapraro

Major preprint just out!

We compare how humans and LLMs form judgments across seven epistemological stages.

We highlight seven fault lines, points at which humans and LLMs fundamentally diverge:

The Grounding fault: Humans anchor judgment in perceptual, embodied, and social experience, whereas LLMs begin from text alone, reconstructing meaning indirectly from symbols.

The Parsing fault: Humans parse situations through integrated perceptual and conceptual processes; LLMs perform mechanical tokenization that yields a structurally convenient but semantically thin representation.

The Experience fault: Humans rely on episodic memory, intuitive physics and psychology, and learned concepts; LLMs rely solely on statistical associations encoded in embeddings.

The Motivation fault: Human judgment is guided by emotions, goals, values, and evolutionarily shaped motivations; LLMs have no intrinsic preferences, aims, or affective significance.

The Causality fault: Humans reason using causal models, counterfactuals, and principled evaluation; LLMs integrate textual context without constructing causal explanations, depending instead on surface correlations.

The Metacognitive fault: Humans monitor uncertainty, detect errors, and can suspend judgment; LLMs lack metacognition and must always produce an output, making hallucinations structurally unavoidable.

The Value fault: Human judgments reflect identity, morality, and real-world stakes; LLM "judgments" are probabilistic next-token predictions without intrinsic valuation or accountability.

Despite these fault lines, humans systematically over-believe LLM outputs, because fluent and confident language produces a credibility bias.

We argue that this creates a structural condition, Epistemia: linguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without actually knowing.

To address Epistemia, we propose three complementary strategies: epistemic evaluation, epistemic governance, and epistemic literacy.

Full paper in the first reply.

Joint with @Walter4C & @matjazperc

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2003457899805233538/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2003457899805233538/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-31T02:48:34.980535",
  "pipeline_version": "2.0"
}
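For scripting against exports like this, the media-metadata block above can be read with the standard library alone. A minimal sketch, assuming only the field names shown in the export (`media`, `url`, `type`, `filename`, `pipeline_version`); the trimmed JSON copy inside the snippet is illustrative, not the full record:

```python
import json

# A trimmed copy of the media-metadata block above (same field names as the export).
raw = """
{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2003457899805233538/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-31T02:48:34.980535",
  "pipeline_version": "2.0"
}
"""

meta = json.loads(raw)

# Collect (filename, url) pairs for each photo attachment.
photos = [(m["filename"], m["url"]) for m in meta["media"] if m["type"] == "photo"]

print(photos[0][0])  # media_0.jpg
```

The same loop generalizes to multi-attachment posts, since `media` is always a list in this schema.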

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2003457899805233538",
  "url": "https://x.com/ValerioCapraro/status/2003457899805233538",
  "twitterUrl": "https://twitter.com/ValerioCapraro/status/2003457899805233538",
  "text": "Major preprint just out!\n\nWe compare how humans and LLMs form judgments across seven epistemological stages. \n\nWe highlight seven fault lines, points at which humans and LLMs fundamentally diverge:\n\nThe Grounding fault: Humans anchor judgment in perceptual, embodied, and social experience, whereas LLMs begin from text alone, reconstructing meaning indirectly from symbols.\n\nThe Parsing fault: Humans parse situations through integrated perceptual and conceptual processes; LLMs perform mechanical tokenization that yields a structurally convenient but semantically thin representation.\n\nThe Experience fault: Humans rely on episodic memory, intuitive physics and psychology, and learned concepts; LLMs rely solely on statistical associations encoded in embeddings.\n\nThe Motivation fault: Human judgment is guided by emotions, goals, values, and evolutionarily shaped motivations; LLMs have no intrinsic preferences, aims, or affective significance.\n\nThe Causality fault: Humans reason using causal models, counterfactuals, and principled evaluation; LLMs integrate textual context without constructing causal explanations, depending instead on surface correlations.\n\nThe Metacognitive fault: Humans monitor uncertainty, detect errors, and can suspend judgment; LLMs lack metacognition and must always produce an output, making hallucinations structurally unavoidable.\n\nThe Value fault: Human judgments reflect identity, morality, and real-world stakes; LLM \"judgments\" are probabilistic next-token predictions without intrinsic valuation or accountability. \n\nDespite these fault lines, humans systematically over-believe LLM outputs, because fluent and confident language produce a credibility bias.\n\nWe argue that this creates a structural condition, Epistemia:\nlinguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without actually knowing.\n\nTo address Epistemia, we propose three complementary strategies: epistemic evaluation, epistemic governance, and epistemic literacy. \n\nFull paper in the first reply.\n\nJoint with @Walter4C & @matjazperc",
  "source": "Twitter for iPhone",
  "retweetCount": 1256,
  "replyCount": 207,
  "likeCount": 4417,
  "quoteCount": 155,
  "viewCount": 598997,
  "createdAt": "Tue Dec 23 13:29:20 +0000 2025",
  "lang": "en",
  "bookmarkCount": 4258,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2003457899805233538",
  "displayTextRange": [
    0,
    303
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "ValerioCapraro",
    "url": "https://x.com/ValerioCapraro",
    "twitterUrl": "https://twitter.com/ValerioCapraro",
    "id": "1856509825",
    "name": "Valerio Capraro",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1987826440117551104/Nm8PaM6g_normal.jpg",
    "coverPicture": "",
    "description": "",
    "location": "Milano, Lombardia",
    "followers": 10125,
    "following": 228,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Thu Sep 12 06:24:52 +0000 2013",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 4938,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 424,
    "statusesCount": 1430,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1800528441546031494"
    ],
    "profile_bio": {
      "description": "Associate Professor at Uni Milan-Bicocca. I write about social behaviour and AI.",
      "entities": {
        "description": {},
        "url": {
          "urls": [
            {
              "display_url": "caprarovalerio.com",
              "expanded_url": "http://caprarovalerio.com",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/zuMx6bmZwL"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/QJIKTeuolS",
        "expanded_url": "https://twitter.com/ValerioCapraro/status/2003457899805233538/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "2003457649451380736",
        "indices": [
          304,
          327
        ],
        "media_key": "3_2003457649451380736",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvNth4OFrAACgACG822WFhXUYIAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG822Hg4WsAAKAAIbzbZYWFdRggAA",
            "media_key": "3_2003457649451380736"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G822Hg4WsAAv-C2.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 672,
              "w": 1200,
              "x": 0,
              "y": 1009
            },
            {
              "h": 1200,
              "w": 1200,
              "x": 0,
              "y": 481
            },
            {
              "h": 1368,
              "w": 1200,
              "x": 0,
              "y": 313
            },
            {
              "h": 1681,
              "w": 841,
              "x": 0,
              "y": 0
            },
            {
              "h": 1681,
              "w": 1200,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1681,
          "width": 1200
        },
        "sizes": {
          "large": {
            "h": 1681,
            "w": 1200
          }
        },
        "type": "photo",
        "url": "https://t.co/QJIKTeuolS"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "user_mentions": [
      {
        "id_str": "468764094",
        "indices": [
          2065,
          2074
        ],
        "name": "W. Quattrociocchi",
        "screen_name": "Walter4C"
      },
      {
        "id_str": "529730279",
        "indices": [
          2077,
          2088
        ],
        "name": "Matjaz Perc",
        "screen_name": "matjazperc"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}
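The count fields in the raw API response above (`likeCount`, `retweetCount`, `replyCount`, `quoteCount`, `bookmarkCount`, `viewCount`) are enough to derive simple engagement figures. A minimal sketch using the numbers from this post; "engagement rate" here is the informal interactions-per-view definition, not an official platform metric:

```python
import json

# A trimmed copy of the raw API response above (field names and values from the export).
raw = """
{
  "id": "2003457899805233538",
  "likeCount": 4417,
  "retweetCount": 1256,
  "replyCount": 207,
  "quoteCount": 155,
  "bookmarkCount": 4258,
  "viewCount": 598997
}
"""

tweet = json.loads(raw)

# Sum every interaction type, then normalize by impressions.
interactions = sum(tweet[k] for k in
                   ("likeCount", "retweetCount", "replyCount",
                    "quoteCount", "bookmarkCount"))
rate = interactions / tweet["viewCount"]

print(f"{interactions} interactions, {rate:.2%} of views")
# 10293 interactions, 1.72% of views
```

Guarding with `tweet.get(k, 0)` would make the sum robust to responses where a count field is absent.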