🐦 Twitter Post Details


@koylanai

AI Agent Personas should simulate the structure of human reasoning.

I've been arguing that you cannot "invent" a digital expert agent using just prompt engineering. You have to extract the expert via deep interviewing.

A new NeurIPS paper, "Simulating Society Requires Simulating Thought," reinforces everything we've discussed about why thin, synthetic LLM personas fail.

Most AI agents operate as "behaviorists." When you prompt an LLM to "act like a senior economist," it relies on surface-level correlations from training data. It generates text that sounds expert-like but lacks any internal belief structure.

1. Logical Inconsistency:
Without an internal model of how beliefs are formed, agents support a policy in one context but oppose it in another. The paper calls this "intervention-invariance mismatch" - beliefs don't update coherently when assumptions change.

2. Illusion of Consensus:
In multi-agent simulations, LLMs converge toward the median view of the training data (with even more positive emotions, as the other paper notes). They agree not because of shared reasoning, but because their statistical priors push them toward the center. Your expert's contrarian, hard-won perspective gets averaged out.

3. Identity Flattening:
LLMs reproduce stereotypical portrayals that erase intersectional variation. "The rich, positional knowledge of real-world stakeholders is replaced with monolithic, decontextualized simulations."

To fix this, we have to move from simulating speech to simulating reasoning. The authors propose a "Cognitive Modeling" approach that goes "beyond output-level alignment toward aligning the internal reasoning traces of generative agents."

Their solution is SEMI-STRUCTURED INTERVIEWS to extract what they call "cognitive motifs" - minimal causal reasoning units that capture how a specific person actually thinks.

This is exactly why we built an interviewer system instead of a persona generator. You have to extract their actual belief structure through conversation.

Instead of predicting the next word, the agent must possess "Reasoning Fidelity": a structured map of beliefs, causal logic, and cognitive motifs.

How do you get this map? You can't prompt for it. You have to interview for it, with AI. The paper explicitly validates the architecture we've built: using semi-structured interviews to elicit "causal explanations" and "reasoning traces."

This confirms why our Interviewer + Note-Taker multi-agent system is critical.
- The Interviewer builds the "Peer Status" necessary to get the expert to open up.
- The Note-Taker (the cognitive layer) extracts the "Cognitive Motifs": the distinctive logic blocks that define how that specific expert solves problems.

We are moving beyond the era of "acting like an expert" to Generative Minds: agents that embody the positional individuality and causal logic of the people they represent.

If you're building AI agents for strategy, decision-making, or stakeholder modelling, start by interviewing the humans behind your agents.
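One way to picture the "cognitive motif" idea is as a small data structure: each motif is a minimal cause-to-conclusion link with a stance, and a belief map is the collection of motifs elicited from one expert. The sketch below is purely illustrative - the class and field names (`CognitiveMotif`, `BeliefMap`, `is_consistent`) are my own, not the paper's formalism or the author's implementation - but it shows how an explicit belief structure makes the "intervention-invariance mismatch" detectable rather than invisible.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a "cognitive motif" as a minimal causal reasoning unit
# (assumption -> conclusion, with a stance), and a per-expert belief map
# built up from interview transcripts. Names are illustrative assumptions.

@dataclass(frozen=True)
class CognitiveMotif:
    cause: str   # the assumption the expert reasons from
    effect: str  # the conclusion or policy they reach
    stance: str  # "supports" or "opposes"

@dataclass
class BeliefMap:
    expert: str
    motifs: list = field(default_factory=list)

    def add(self, motif: CognitiveMotif) -> None:
        """Record one reasoning unit elicited during an interview."""
        self.motifs.append(motif)

    def stance_on(self, effect: str) -> set:
        """Every stance this expert has expressed toward a conclusion."""
        return {m.stance for m in self.motifs if m.effect == effect}

    def is_consistent(self, effect: str) -> bool:
        """A crude check for the intervention-invariance mismatch:
        the same expert both supporting and opposing one conclusion.
        With explicit causes attached, a reviewer can then judge whether
        the conflict is a contradiction or legitimate context-dependence."""
        return len(self.stance_on(effect)) <= 1
```

A prompt-only persona has no analogue of `stance_on`: its "beliefs" exist only as sampled text, so a contradiction across contexts can't even be queried, let alone repaired.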

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1999192104850133146/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1999192104850133146/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-13T18:59:40.967332",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1999192104850133146",
  "url": "https://x.com/koylanai/status/1999192104850133146",
  "twitterUrl": "https://twitter.com/koylanai/status/1999192104850133146",
  "text": "AI Agent Personas should simulate the structure of human reasoning.\n\nI’ve been arguing that you cannot \"invent\" a digital expert agent using just prompt engineering. You have to extract the expert via deep interviewing. \n\nA new NeurIPS paper, \"Simulating Society Requires Simulating Thought\" reinforces everything we've discussed about why thin, synthetic LLM personas fail.\n\nMost AI agents operate as \"behaviorists.\" When you prompt an LLM to \"act like a senior economist,\" it relies on surface-level correlations from training data. It generates text that sounds expert-like, but lacks any internal belief structure.\n\n1. Logical Inconsistency:\nWithout an internal model of how beliefs are formed, agents support a policy in one context but oppose it in another. The paper calls this \"intervention-invariance mismatch\" - beliefs don't update coherently when assumptions change.\n\n2. Illusion of Consensus:\nIn multi-agent simulations, LLMs converge toward the median view (even more positive emotions as the other paper mentions) of the training data. They agree not because of shared reasoning, but because their statistical priors push them toward the center. Your expert's contrarian, hard-won perspective gets averaged out.\n\n3. Identity Flattening:\nLLMs reproduce stereotypical portrayals that erase intersectional variation. \"The rich, positional knowledge of real-world stakeholders is replaced with monolithic, decontextualized simulations.\"\n\nTo fix this, we have to move from simulating speech to simulating reasoning. The authors propose a \"Cognitive Modeling\" approach. \n\n\"beyond output-level alignment toward aligning the internal reasoning traces of generative agents.\"\n\nTheir solution is SEMI-STRUCTURED INTERVIEWS to extract what they call \"cognitive motifs\" - minimal causal reasoning units that capture how a specific person actually thinks.\n\nThis is exactly why we built an interviewer system instead of a persona generator. You have to extract their actual belief structure through conversation.\n\nInstead of predicting the next word, the agent must possess \"Reasoning Fidelity\", a structured map of beliefs, causal logic, and cognitive motifs.\n\nHow do you get this map? \nYou can't prompt for it. You have to interview for it, with AI. The paper explicitly validates the architecture we’ve built: using semi-structured interviews to elicit \"causal explanations\" and \"reasoning traces\".\n\nThis confirms why our Interviewer + Note-Taker multi-agent system is critical.\n- The Interviewer builds the \"Peer Status\" necessary to get the expert to open up.\n- The Note-Taker (the cognitive layer) extracts the \"Cognitive Motifs\", the distinctive logic blocks that define how that specific expert solves problems.\n\nWe are moving beyond the era of \"acting like an expert\" to Generative Minds; agents that embody the positional individuality and causal logic of the people they represent.\n\nIf you're building AI agents for strategy, decision-making, or stakeholder modelling, start by interviewing the human aspects of your agents.",
  "source": "Twitter for iPhone",
  "retweetCount": 64,
  "replyCount": 19,
  "likeCount": 345,
  "quoteCount": 4,
  "viewCount": 28044,
  "createdAt": "Thu Dec 11 18:58:35 +0000 2025",
  "lang": "en",
  "bookmarkCount": 352,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1999192104850133146",
  "displayTextRange": [
    0,
    272
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "koylanai",
    "url": "https://x.com/koylanai",
    "twitterUrl": "https://twitter.com/koylanai",
    "id": "1603551009854423040",
    "name": "Muratcan Koylan",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1961793099031527424/Nt1ZvaJe_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1603551009854423040/1759435051",
    "description": "",
    "location": "Toronto, Canada 🇨🇦",
    "followers": 9884,
    "following": 4004,
    "status": "",
    "canDm": true,
    "canMediaTag": false,
    "createdAt": "Fri Dec 16 00:42:17 +0000 2022",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 53300,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 2495,
    "statusesCount": 10449,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1652388457036017667"
    ],
    "profile_bio": {
      "description": "AI Agent Systems Manager @ https://t.co/auQ835cMeJ | prompt design & context engineering, persona embodiment and multi-agent architectures | https://t.co/ZT3KGWkFQt",
      "entities": {
        "description": {
          "urls": [
            {
              "display_url": "99ravens.ai",
              "expanded_url": "http://99ravens.ai",
              "indices": [
                27,
                50
              ],
              "url": "https://t.co/auQ835cMeJ"
            },
            {
              "display_url": "github.com/muratcankoylan",
              "expanded_url": "http://github.com/muratcankoylan",
              "indices": [
                141,
                164
              ],
              "url": "https://t.co/ZT3KGWkFQt"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "muratcankoylan.com",
              "expanded_url": "https://muratcankoylan.com/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/SSBIb4OtFw"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/XlQTnxC4WO",
        "expanded_url": "https://twitter.com/koylanai/status/1999192104850133146/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "1999183964221702144",
        "indices": [
          273,
          296
        ],
        "media_key": "3_1999183964221702144",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARu+hzksF4AACgACG76OoI9bkJoAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG76HOSwXgAAKAAIbvo6gj1uQmgAA",
            "media_key": "3_1999183964221702144"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G76HOSwXgAAZaJB.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 783,
              "w": 1398,
              "x": 0,
              "y": 0
            },
            {
              "h": 1398,
              "w": 1398,
              "x": 0,
              "y": 0
            },
            {
              "h": 1594,
              "w": 1398,
              "x": 0,
              "y": 0
            },
            {
              "h": 1704,
              "w": 852,
              "x": 0,
              "y": 0
            },
            {
              "h": 1704,
              "w": 1398,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1704,
          "width": 1398
        },
        "sizes": {
          "large": {
            "h": 1704,
            "w": 1398
          }
        },
        "type": "photo",
        "url": "https://t.co/XlQTnxC4WO"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {},
  "quoted_tweet": {
    "type": "tweet",
    "id": "1998530190847390025",
    "url": "",
    "twitterUrl": "",
    "text": "",
    "source": "Twitter for iPhone",
    "retweetCount": 0,
    "replyCount": 0,
    "likeCount": 0,
    "quoteCount": 0,
    "viewCount": 0,
    "createdAt": "",
    "lang": "",
    "bookmarkCount": 0,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "",
    "displayTextRange": [],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {},
    "extendedEntities": {},
    "card": null,
    "place": {},
    "entities": {},
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "isLimitedReply": false,
    "article": null
  },
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}