🐦 Twitter Post Details



This is wild! There's an AI that literally rewrites its own trading code to beat the market. Not tuning parameters. Not learning patterns. Actually rewriting the Python functions that decide when to buy and sell. Let me explain this insanity:

Traditional trading bots work like this:
Human codes the strategy
AI adjusts weights/parameters
Strategy structure stays FIXED
Market changes β†’ bot breaks
Human fixes it manually

This is exhausting and doesn't scale.

ProFiT (Program Search for Financial Trading) does something completely different. It treats trading strategies as living organisms that evolve. Each strategy is actual Python code. Not weights. Not parameters. CODE.

Here's the evolutionary loop:
1️⃣ Start with a basic strategy (say, a MACD crossover)
2️⃣ An LLM reads the code + its performance report
3️⃣ The LLM diagnoses weaknesses
4️⃣ The LLM proposes improvements
5️⃣ The new code gets backtested
6️⃣ If good β†’ kept in the population
7️⃣ Repeat forever

The genius part? "Semantic mutation." Traditional genetic programming randomly flips bits of code (often breaking it). ProFiT's LLM actually understands what the code does: "This strategy lacks volatility filters. Add ATR-based gating to reduce false signals." LOGICAL evolution.

And they don't keep just ONE best strategy. They maintain a POPULATION of all strategies that beat a minimum threshold. Why? Diversity prevents getting stuck in local optima. It's like keeping multiple species alive instead of just the "fittest" one. A quality-diversity approach.
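The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `backtest` and `mutate_with_llm` are hypothetical stand-ins for ProFiT's real fitness evaluation and LLM rewrite step, and the threshold-gated population is the quality-diversity idea from the thread.

```python
import random

def backtest(strategy_code: str) -> float:
    """Stand-in fitness function returning a Sharpe-like score.
    A real backtest would execute the strategy over historical bars."""
    random.seed(hash(strategy_code) % (2**32))  # deterministic per code string
    return random.uniform(-1.0, 2.0)

def mutate_with_llm(strategy_code: str, score: float) -> str:
    """Stand-in for the 'semantic mutation' step: in ProFiT an LLM reads
    the code plus its performance report and rewrites the code itself."""
    return strategy_code + f"\n# revised after score {score:.2f}"

def evolve(seed_code: str, threshold: float = 0.0, generations: int = 10):
    # Population of (code, score); every member beating the threshold is kept,
    # preserving diversity instead of a single "fittest" strategy.
    population = [(seed_code, backtest(seed_code))]
    for _ in range(generations):
        parent_code, parent_score = random.choice(population)
        child_code = mutate_with_llm(parent_code, parent_score)
        child_score = backtest(child_code)
        if child_score > threshold:
            population.append((child_code, child_score))
    return population

pop = evolve("def signal(bar): ...  # MACD crossover seed")
```

Any member of `pop` can then be picked as a parent for the next generation, which is what keeps the search from collapsing onto one local optimum.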
Real results across 7 futures markets (E6, ES, Bitcoin, etc.):
πŸ“ˆ Beat buy-and-hold in 77% of cases
πŸ“ˆ Beat random strategies 100% of the time
πŸ“ˆ +44% average return improvement over seed strategies
πŸ“ˆ +0.57 Sharpe ratio improvement
Statistically significant (p < 0.05 on Wilcoxon tests).

Let's look at one evolution path:

Generation 0: basic MACD crossover
β†’ Returns: -54%
β†’ 25 lines of code

Generation 15: MACD + regime filter + ATR stops + volatility gates + debouncing
β†’ Returns: +0.77%
β†’ 90 lines of sophisticated logic

The LLM built that complexity.

How does this compare to prior work?
πŸ”΄ Reinforcement learning: optimizes weights, structure stays fixed
πŸ”΄ Classic GP: random mutations, no reasoning
πŸ”΄ Codex/AlphaCode: one-shot generation, no iteration
🟒 ProFiT: iterative, semantic, empirically grounded
It's a NEW paradigm.

Pain points this solves:
❌ Non-stationarity (markets change constantly)
βœ… Code evolution adapts structure, not just parameters
❌ Black boxes you can't trust
βœ… Human-readable Python you can inspect
❌ Constant human intervention
βœ… Autonomous improvement loop

The validation methodology is RIGOROUS:
5-fold walk-forward cross-validation
2.5 years train, 6 months validation, 6 months test
10-day dormant windows to prevent lookahead bias
Fixed transaction costs (0.2%)
Multiple seed strategies tested
This isn't overfit garbage.

Inspiration comes from wild places:
🧬 Genetic programming (Koza)
πŸ€– GΓΆdel machines (self-improving systems)
🎯 MAP-Elites (quality-diversity)
🧠 LLM code generation (Codex)
They mashed it all together and pointed it at financial markets.

A limitation they acknowledge: testing against FIXED historical data doesn't show how it adapts to real-time regime changes. They're working on that. (Imagine this running live, evolving strategies as the market shifts beneath it...)
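The walk-forward setup above can be sketched concretely. This is illustrative only: the train/validation/test lengths and 10-day dormant gaps come from the thread, but the exact fold spacing is an assumption (here each fold simply advances by one test window), not the paper's published scheme.

```python
from datetime import date, timedelta

def walk_forward_splits(start: date, folds: int = 5,
                        train_days: int = 912,    # ~2.5 years of training data
                        val_days: int = 182,      # ~6 months of validation
                        test_days: int = 182,     # ~6 months of out-of-sample test
                        dormant_days: int = 10):  # gap to prevent lookahead bias
    """Yield (train, validation, test) date ranges for each fold."""
    step = test_days  # assumed: each fold slides forward by one test window
    for k in range(folds):
        t0 = start + timedelta(days=k * step)
        train = (t0, t0 + timedelta(days=train_days))
        val_start = train[1] + timedelta(days=dormant_days)
        val = (val_start, val_start + timedelta(days=val_days))
        test_start = val[1] + timedelta(days=dormant_days)
        test = (test_start, test_start + timedelta(days=test_days))
        yield train, val, test

for train, val, test in walk_forward_splits(date(2020, 1, 1)):
    print(train[0], "β†’", test[1])
```

The dormant windows matter: without the gap between the end of training and the start of validation, a strategy holding a position across the boundary would leak information forward.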
Future directions they hint at:
Evolving the prompts themselves (meta-optimization)
Cross-asset strategy evolution
Multi-parent recombination between strategies
Real-time deployment with continuous adaptation

This is just the beginning.

Bottom line: we're shifting from "training AI to predict markets" to "AI that rewrites how it thinks about markets." Not parameter learning. Strategy evolution.

The paper: "ProFiT: Program Search for Financial Trading" by Siper et al.

Wild times ahead. πŸš€

Media 1

πŸ“Š Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1999646267987702020/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1999646267987702020/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-13T02:11:09.434538",
  "pipeline_version": "2.0"
}

πŸ”§ Raw API Response

{
  "type": "tweet",
  "id": "1999646267987702020",
  "url": "",
  "twitterUrl": "",
  "text": "This is wild!  There's an AI that literally rewrites its own trading code to beat the market.\n\nNot tuning parameters. Not learning patterns.\n\nActually rewriting the Python functions that decide when to buy and sell.\n\nLet me explain this insanity:\n\nTraditional trading bots work like this:\n\nHuman codes strategy\n\nAI adjusts weights/parameters\nStrategy structure stays FIXED\nMarket changes β†’ bot breaks\nHuman fixes it manually\n\nThis is exhausting and doesn't scale.\n\nProFiT (Program Search for Financial Trading) does something completely different.\n\nIt treats trading strategies as living organisms that evolve.\n\nEach strategy is actual Python code. Not weights. Not parameters. CODE.\n\nHere's the evolutionary loop:\n\n1️⃣ Start with a basic strategy (say, MACD crossover)\n2️⃣ LLM reads the code + performance report\n3️⃣ LLM diagnoses weaknesses\n4️⃣ LLM proposes improvements\n5️⃣ New code gets backtested\n6️⃣ If good β†’ kept in population\n7️⃣ Repeat forever\n\nThe genius part? \"Semantic mutation\"\n\nTraditional genetic programming randomly flips bits of code (often breaking it).\n\nProFiT's LLM actually understands what the code does:\n\"This strategy lacks volatility filters. Add ATR-based gating to reduce false signals.\"\n\nLOGICAL evolution.\n\nAnd they don't keep just ONE best strategy.\nThey maintain a POPULATION of all strategies that beat a minimum threshold.\nWhy? 
Diversity prevents getting stuck in local optima.\n\nIt's like keeping multiple species alive instead of just the \"fittest\" one.\n\nQuality-Diversity approach.\n\nReal results across 7 futures markets (E6, ES, Bitcoin, etc.):\n\nπŸ“ˆ Beat Buy-and-Hold in 77% of cases\nπŸ“ˆ Beat random strategies 100% of time\nπŸ“ˆ +44% average return improvement over seed strategies\nπŸ“ˆ +0.57 Sharpe ratio improvement\n\nStatistically significant (p < 0.05 on Wilcoxon tests)\n\nLet's look at one evolution path:\n\nGeneration 0: Basic MACD crossover\nβ†’ Returns: -54%\nβ†’ 25 lines of code\n\nGeneration 15: MACD + regime filter + ATR stops + volatility gates + debouncing\nβ†’ Returns: +0.77%\nβ†’ 90 lines of sophisticated logic\n\nThe LLM built that complexity.\n\nHow does this compare to prior work?\n\nπŸ”΄ Reinforcement Learning: Optimizes weights, structure stays fixed\nπŸ”΄ Classic GP: Random mutations, no reasoning\nπŸ”΄ Codex/AlphaCode: One-shot generation, no iteration\n🟒 ProFiT: Iterative, semantic, empirically grounded\n\nIt's a NEW paradigm.\n\nPain points this solves:\n\n❌ Non-stationarity (markets change constantly)\nβœ… Code evolution adapts structure, not just params\n❌ Black boxes you can't trust\nβœ… Human-readable Python you can inspect\n❌ Constant human intervention\nβœ… Autonomous improvement loop\n11/15\n\nThe validation methodology is RIGOROUS:\n\n5-fold walk-forward cross-validation\n\n2.5 years train, 6 months validation, 6 months test\n\n10-day dormant windows to prevent lookahead bias\n\nFixed transaction costs (0.2%)\nMultiple seed strategies tested\nThis isn't overfit garbage.\n\nInspiration comes from wild places:\n🧬 Genetic Programming (Koza)\nπŸ€– GΓΆdel Machines (self-improving systems)\n🎯 MAP-Elites (quality-diversity)\n🧠 LLM code generation (Codex)\n\nThey mashed it all together and pointed it at financial markets.\n\nCurrent limitation they acknowledge:\n\nTesting against FIXED historical data doesn't show how it adapts to real-time regime 
changes.\n\nThey're working on that.\n(Imagine this running live, evolving strategies as the market shifts beneath it...)\n\nFuture directions they hint at:\n\nEvolving the prompts themselves (meta-optimization)\n\nCross-asset strategy evolution\n\nMulti-parent recombination between strategies\n\nReal-time deployment with continuous adaptation\n\nThis is just the beginning.\n\nBottom line:\nWe're shifting from \"training AI to predict markets\" to \"AI that rewrites how it thinks about markets.\"\n\nNot parameter learning.\n Strategy evolution.\nThe paper: \"ProFiT: Program Search for Financial Trading\" by Siper et al.\n\nWild times ahead. πŸš€",
  "source": "Twitter for iPhone",
  "retweetCount": 1,
  "replyCount": 1,
  "likeCount": 9,
  "quoteCount": 0,
  "viewCount": 482,
  "createdAt": "Sat Dec 13 01:03:16 +0000 2025",
  "lang": "en",
  "bookmarkCount": 6,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1999646267987702020",
  "displayTextRange": [
    0,
    278
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "",
    "url": "https://x.com/",
    "twitterUrl": "https://twitter.com/",
    "id": "3027431134",
    "name": "",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1998536249003380736/ew_SXOsl_normal.png",
    "coverPicture": "https://pbs.twimg.com/profile_banners/3027431134/1765452331",
    "description": "Quaternion Process Theory, Artificial (Intuition, Fluency, Empathy), Patterns for (Generative, Reason, Agentic) AI, \nhttps://t.co/fhXw0zjxXp",
    "location": "Arlington, VA",
    "followers": 48479,
    "following": 5094,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Tue Feb 10 00:02:32 +0000 2015",
    "entities": {
      "description": {
        "urls": [
          {
            "display_url": "intuitionmachine.gumroad.com",
            "expanded_url": "https://intuitionmachine.gumroad.com/",
            "url": "https://t.co/fhXw0zjxXp",
            "indices": [
              117,
              140
            ]
          }
        ]
      },
      "url": {
        "urls": [
          {
            "display_url": "medium.com/@intuitmachine",
            "expanded_url": "https://medium.com/@intuitmachine",
            "url": "https://t.co/IUUfwDjIBX",
            "indices": [
              0,
              23
            ]
          }
        ]
      }
    },
    "fastFollowersCount": 0,
    "favouritesCount": 59303,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 8063,
    "statusesCount": 113384,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1930575250988675403"
    ],
    "profile_bio": {
      "description": "Quaternion Process Theory, Artificial (Intuition, Fluency, Empathy), Patterns for (Generative, Reason, Agentic) AI, \nhttps://t.co/fhXw0zjxXp"
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.x.com/wlE2VgVaqK",
        "expanded_url": "https://x.com/IntuitMachine/status/1999646267987702020/photo/1",
        "id_str": "1999646247590723585",
        "indices": [
          279,
          302
        ],
        "media_key": "3_1999646247590723585",
        "media_url_https": "https://pbs.twimg.com/media/G8ArqugWgAEs7SP.jpg",
        "type": "photo",
        "url": "https://t.co/wlE2VgVaqK",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "x": 40,
                "y": 400,
                "h": 42,
                "w": 42
              }
            ]
          },
          "medium": {
            "faces": [
              {
                "x": 40,
                "y": 400,
                "h": 42,
                "w": 42
              }
            ]
          },
          "small": {
            "faces": [
              {
                "x": 23,
                "y": 230,
                "h": 24,
                "w": 24
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "x": 40,
                "y": 400,
                "h": 42,
                "w": 42
              }
            ]
          }
        },
        "sizes": {
          "large": {
            "h": 660,
            "w": 1182,
            "resize": "fit"
          },
          "medium": {
            "h": 660,
            "w": 1182,
            "resize": "fit"
          },
          "small": {
            "h": 380,
            "w": 680,
            "resize": "fit"
          },
          "thumb": {
            "h": 150,
            "w": 150,
            "resize": "crop"
          }
        },
        "original_info": {
          "height": 660,
          "width": 1182,
          "focus_rects": [
            {
              "x": 0,
              "y": 0,
              "w": 1179,
              "h": 660
            },
            {
              "x": 0,
              "y": 0,
              "w": 660,
              "h": 660
            },
            {
              "x": 0,
              "y": 0,
              "w": 579,
              "h": 660
            },
            {
              "x": 0,
              "y": 0,
              "w": 330,
              "h": 660
            },
            {
              "x": 0,
              "y": 0,
              "w": 1182,
              "h": 660
            }
          ]
        },
        "allow_download_status": {
          "allow_download": true
        },
        "media_results": {
          "result": {
            "media_key": "3_1999646247590723585"
          }
        }
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "article": null
}