🐦 Twitter Post Details


@berryxia

Gary Marcus has dropped another bombshell!

He cuts straight to the core takeaway from the Claude Code source leak:

✅ Claude Code is the biggest advance since the LLM
✅ But it is not a pure LLM, and not pure deep learning
✅ The core file print.ts runs a full 3,167 lines, packed with if-then branches plus deterministic symbolic logic

When it mattered most, Anthropic fell back on classical symbolic AI as the safety net, and that is what makes the Agent truly reliable.

This move directly validates the Neurosymbolic AI (neural-symbolic hybrid) path Marcus has been championing for over 20 years!

Scaling is no longer the only answer; the hybrid path is the future.

The full long post is worth a careful read 👇
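The architecture the post describes (a deterministic, symbolic if-then layer that validates and routes probabilistic model output, rather than trusting it directly) can be sketched roughly as below. All names here (`ModelReply`, `routeReply`, the tool list) are illustrative assumptions for the sketch, not identifiers from the leaked Claude Code source.

```typescript
// Hypothetical sketch of the neurosymbolic pattern described in the post:
// the neural side (an LLM) proposes an action, and a deterministic symbolic
// layer decides what actually happens. None of these names come from print.ts.

type ModelReply =
  | { kind: "tool_call"; tool: string; args: Record<string, unknown> }
  | { kind: "text"; content: string };

// Symbolic knowledge: the closed set of tools the agent may actually run.
const KNOWN_TOOLS = new Set(["read_file", "run_shell"]);

// Deterministic if-then routing: every branch is an explicit rule, so the
// outcome is reproducible regardless of how erratic the model output is.
function routeReply(reply: ModelReply): string {
  if (reply.kind === "tool_call") {
    if (!KNOWN_TOOLS.has(reply.tool)) {
      // Hard symbolic guard: a hallucinated tool name is rejected outright.
      return `reject: unknown tool ${reply.tool}`;
    }
    if (Object.keys(reply.args).length === 0) {
      // Malformed call: loop back to the model instead of executing.
      return `retry: ${reply.tool} called with no args`;
    }
    return `dispatch: ${reply.tool}`;
  }
  // Plain text falls through to the user.
  return "print";
}
```

The point of the sketch is the division of labor: the model's suggestion is just data, and only the deterministic branches above decide whether anything is executed.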

📊 Media Metadata

{
  "score": 0.4,
  "score_components": {
    "author": 0.09,
    "engagement": 0.0,
    "quality": 0.1,
    "source": 0.135,
    "nlp": 0.05,
    "recency": 0.025
  },
  "scored_at": "2026-04-12T01:07:17.743695",
  "import_source": "api_import",
  "source_tagged_at": "2026-04-12T01:07:17.743704",
  "enriched": true,
  "enriched_at": "2026-04-12T01:07:17.743706"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2043092256958316815",
  "url": "https://x.com/berryxia/status/2043092256958316815",
  "twitterUrl": "https://twitter.com/berryxia/status/2043092256958316815",
  "text": "Gary Marcus 又放大招了!\n\n他直接把 Claude Code 源码泄露后的核心真相点破:\n\n✅ Claude Code 是 LLM 时代以来最大进步\n✅ 但它根本不是纯 LLM,也不是纯深度学习  \n✅ 核心文件 print.ts 足足 3167 行,塞满了 if-then 分支 + 确定性符号逻辑  \n\nAnthropic 在关键时刻还是靠经典符号 AI来保底,才让 Agent 真正可靠。\n\n这波操作,等于直接验证了 Marcus 过去 20 多年一直喊的 Neurosymbolic AI(神经符号混合)路线!\n\nScaling 不再是唯一答案,混合路线才是未来\n\n完整长文值得细读👇",
  "source": "Twitter for iPhone",
  "retweetCount": 5,
  "replyCount": 0,
  "likeCount": 9,
  "quoteCount": 0,
  "viewCount": 2554,
  "createdAt": "Sat Apr 11 22:22:07 +0000 2026",
  "lang": "zh",
  "bookmarkCount": 17,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2043092256958316815",
  "displayTextRange": [
    0,
    191
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "berryxia",
    "url": "https://x.com/berryxia",
    "twitterUrl": "https://twitter.com/berryxia",
    "id": "431617343",
    "name": "Berryxia.AI",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/2016043556964605952/Tqi9KA2r_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/431617343/1765523308",
    "description": "",
    "location": "",
    "followers": 33254,
    "following": 517,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Thu Dec 08 13:56:00 +0000 2011",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 2773,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 692,
    "statusesCount": 4683,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2042609409667535327"
    ],
    "profile_bio": {
      "description": "🧠✨Building AI tools AI System Prompt ❤️🐳      💻    Love Design & Coding & Share Prompt! 💼📮:Andyhuo@me.com",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": []
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {},
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": {
    "type": "tweet",
    "id": "2042987819333738929",
    "url": "https://x.com/GaryMarcus/status/2042987819333738929",
    "twitterUrl": "https://twitter.com/GaryMarcus/status/2042987819333738929",
    "text": "Claude Code is not AGI, but it is the single biggest advance in AI since the LLM.\n\nBut the thing is, Claude Code is NOT a pure LLM. And it’s not pure deep learning. Not even close. \n\nAnd that changes everything. \n\nThe source code leak proves it. Tucked away at its center is a 3,167 line kernel called print.ts.\n\nprint.ts is a pattern matching. And pattern matching is supposed to be the *strength* of LLMs. \n\nBut Anthropic figured out that if you really need to get your patterns right, you can’t trust a pure LLM. They are too probabilistic. And too erratic.\n\nInstead, the way Anthropic built that kernel is straight out of classical symbolic AI.  For example, it is in large part a big IF-THEN conditional, with 486 branch points and 12 levels of nesting — all inside a deterministic, symbolic loop that the real godfathers of AI, people like John McCarthy and Marvin Minsky and Herb Simon, would have instantly recognized.*\n\nPutting things differently, Anthropic, when push came to shove, went exactly where I long said the field needed to go (and where @geoffreyhinton said we didn’t need to go): to Neurosymbolic AI. \n\nThat’s right, the biggest advance since the LLM was neurosymbolic.  AlphaFold, AlphaEvolve, AlphaProof, and AlphaGeometry are all neurosymbolic, too; so is Code Interpreter; when you are calling code, you are asking symbolic AI do an important part of the work.\n\nClaude Code isn’t better because of scaling. \n\nIt’s better because Anthropic accepted the importance of using classical AI techniques alongside neural networks — precisely marriage I have long advocated. \n\nIt’s *massive* vindication for me (go see my 2019 debate with Bengio for context, or to my 2001 book, The Algebraic Mind), but it still ain’t perfect, or even close.\n\nWhat we really need to do to get trustworthy AI rather than the current unpredictable “jagged” mess, is to go in the knowledge-, reasoning-, and world-model driven direction I laid out in 2020, in an article called the Next Decade in AI, in which neurosymbolic AI is just the *starting point* in a longer journey.*\n\nRead that article if you want to know what else we need to do next. \n\nThe first part has already come to pass. In time, other three will, too.\n\nMeanwhile, the implications for the allocation of capital are pretty massive: smartly adding in bits of symbolic AI can do a lot more than scaling alone, and even Anthropic as now discovered (though they won’t say) scaling is no longer the essence of innovation. \n\nThe paradigm has changed.\n\n—\n*Claude Code is plainly neurosymbolic but the code part is a mess; as Ernie Davis and I argued in Rebooting AI in 2019, we also need major advances in software engineering. But that’s a story for another day.",
    "source": "Twitter for iPhone",
    "retweetCount": 318,
    "replyCount": 117,
    "likeCount": 1806,
    "quoteCount": 47,
    "viewCount": 258465,
    "createdAt": "Sat Apr 11 15:27:07 +0000 2026",
    "lang": "en",
    "bookmarkCount": 1733,
    "isReply": false,
    "inReplyToId": null,
    "conversationId": "2042987819333738929",
    "displayTextRange": [
      0,
      276
    ],
    "inReplyToUserId": null,
    "inReplyToUsername": null,
    "author": {
      "type": "user",
      "userName": "GaryMarcus",
      "url": "https://x.com/GaryMarcus",
      "twitterUrl": "https://twitter.com/GaryMarcus",
      "id": "232294292",
      "name": "Gary Marcus",
      "isVerified": false,
      "isBlueVerified": true,
      "verifiedType": null,
      "profilePicture": "https://pbs.twimg.com/profile_images/1907157274637869057/ZS9Ui6fn_normal.jpg",
      "coverPicture": "https://pbs.twimg.com/profile_banners/232294292/1727132331",
      "description": "",
      "location": "",
      "followers": 215224,
      "following": 6950,
      "status": "",
      "canDm": true,
      "canMediaTag": false,
      "createdAt": "Thu Dec 30 19:20:12 +0000 2010",
      "entities": {
        "description": {
          "urls": []
        },
        "url": {}
      },
      "fastFollowersCount": 0,
      "favouritesCount": 85067,
      "hasCustomTimelines": true,
      "isTranslator": false,
      "mediaCount": 3595,
      "statusesCount": 56271,
      "withheldInCountries": [],
      "affiliatesHighlightedLabel": {},
      "possiblySensitive": false,
      "pinnedTweetIds": [
        "1962212908840239366"
      ],
      "profile_bio": {
        "description": "“In the aftermath of GPT-5’s launch … the views of critics like Marcus seem increasingly moderate.” —@newyorker",
        "entities": {
          "description": {
            "hashtags": [],
            "symbols": [],
            "urls": [],
            "user_mentions": [
              {
                "id_str": "0",
                "indices": [
                  101,
                  111
                ],
                "name": "",
                "screen_name": "newyorker"
              }
            ]
          },
          "url": {
            "urls": [
              {
                "display_url": "garymarcus.substack.com",
                "expanded_url": "http://garymarcus.substack.com",
                "indices": [
                  0,
                  23
                ],
                "url": "https://t.co/RrElrTVHSz"
              }
            ]
          }
        }
      },
      "isAutomated": false,
      "automatedBy": null
    },
    "extendedEntities": {},
    "card": null,
    "place": {},
    "entities": {
      "hashtags": [],
      "symbols": [],
      "urls": [],
      "user_mentions": [
        {
          "id_str": "1084212657761148928",
          "indices": [
            1058,
            1073
          ],
          "name": "Geoffrey Hinton",
          "screen_name": "geoffreyhinton"
        }
      ]
    },
    "quoted_tweet": null,
    "retweeted_tweet": null,
    "isLimitedReply": false,
    "communityInfo": null,
    "article": null
  },
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "communityInfo": null,
  "article": null
}