This is wild! There's an AI that literally rewrites its own trading code to beat the market.

Not tuning parameters. Not learning patterns. Actually rewriting the Python functions that decide when to buy and sell.

Let me explain this insanity:

Traditional trading bots work like this:
Human codes the strategy
AI adjusts weights/parameters
Strategy structure stays FIXED
Market changes → bot breaks
Human fixes it manually

This is exhausting and doesn't scale.

ProFiT (Program Search for Financial Trading) does something completely different. It treats trading strategies as living organisms that evolve.

Each strategy is actual Python code. Not weights. Not parameters. CODE.

Here's the evolutionary loop:
1️⃣ Start with a basic strategy (say, MACD crossover)
2️⃣ LLM reads the code + performance report
3️⃣ LLM diagnoses weaknesses
4️⃣ LLM proposes improvements
5️⃣ New code gets backtested
6️⃣ If good → kept in population
7️⃣ Repeat forever

The genius part? "Semantic mutation."

Traditional genetic programming randomly flips bits of code (often breaking it). ProFiT's LLM actually understands what the code does:

"This strategy lacks volatility filters. Add ATR-based gating to reduce false signals."

LOGICAL evolution.

And they don't keep just ONE best strategy. They maintain a POPULATION of all strategies that beat a minimum threshold.

Why? Diversity prevents getting stuck in local optima. It's like keeping multiple species alive instead of just the "fittest" one. Quality-Diversity approach.
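The evolutionary loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `llm_rewrite` and `backtest` are hypothetical stand-ins for the LLM mutation step and the backtester, and the threshold filter stands in for the population-management rule.

```python
import random

def llm_rewrite(code: str, report: dict) -> str:
    """Stand-in for the LLM mutation step (a no-op placeholder here).

    In a real system this would send the strategy source plus its
    performance report to an LLM and return edited Python source.
    """
    return code

def backtest(code: str) -> dict:
    """Stand-in for the backtester; returns performance metrics."""
    return {"sharpe": random.uniform(-1.0, 2.0)}

def evolve(seed_code: str, generations: int = 15,
           min_sharpe: float = 0.0) -> list:
    """Run the mutate -> backtest -> filter loop for a few generations."""
    population = [(seed_code, backtest(seed_code))]
    for _ in range(generations):
        # Pick any parent, not just the best one (diversity over greed).
        parent_code, parent_report = random.choice(population)
        child_code = llm_rewrite(parent_code, parent_report)
        child_report = backtest(child_code)
        # Keep every strategy above the threshold, not a single champion.
        if child_report["sharpe"] >= min_sharpe:
            population.append((child_code, child_report))
    return population
```

Note the quality-diversity flavor: survivors are everything above `min_sharpe`, so weaker-but-different strategies stay in the gene pool.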
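To make the "add ATR-based gating" mutation concrete, here is one plausible shape such an edit could take: suppress entry signals when the Average True Range is large relative to price. Every name and default here is my own illustration, not code from the paper.

```python
def true_range(high: float, low: float, prev_close: float) -> float:
    """Largest of the three classic true-range components."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(highs: list, lows: list, closes: list, period: int = 14) -> list:
    """Simple-moving-average Average True Range, one value per bar."""
    trs = [highs[0] - lows[0]]
    for i in range(1, len(closes)):
        trs.append(true_range(highs[i], lows[i], closes[i - 1]))
    out = []
    for i in range(len(trs)):
        window = trs[max(0, i - period + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def gate_signals(signals: list, highs: list, lows: list, closes: list,
                 max_atr_pct: float = 0.02) -> list:
    """Zero out signals on bars where ATR exceeds a fraction of price."""
    a = atr(highs, lows, closes)
    return [0 if a[i] > max_atr_pct * closes[i] else s
            for i, s in enumerate(signals)]
```

A mutation like this changes the strategy's *structure* (a new filter stage), which is exactly what parameter tuning cannot do.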
Real results across 7 futures markets (E6, ES, Bitcoin, etc.):

📈 Beat Buy-and-Hold in 77% of cases
📈 Beat random strategies 100% of the time
📈 +44% average return improvement over seed strategies
📈 +0.57 Sharpe ratio improvement

Statistically significant (p < 0.05 on Wilcoxon tests).

Let's look at one evolution path:

Generation 0: basic MACD crossover
→ Returns: -54%
→ 25 lines of code

Generation 15: MACD + regime filter + ATR stops + volatility gates + debouncing
→ Returns: +0.77%
→ 90 lines of sophisticated logic

The LLM built that complexity.

How does this compare to prior work?
🔴 Reinforcement Learning: optimizes weights, structure stays fixed
🔴 Classic GP: random mutations, no reasoning
🔴 Codex/AlphaCode: one-shot generation, no iteration
🟢 ProFiT: iterative, semantic, empirically grounded

It's a NEW paradigm.

Pain points this solves:
❌ Non-stationarity (markets change constantly) → code evolution adapts structure, not just params
❌ Black boxes you can't trust → human-readable Python you can inspect
❌ Constant human intervention → autonomous improvement loop

The validation methodology is RIGOROUS:
5-fold walk-forward cross-validation
2.5 years train, 6 months validation, 6 months test
10-day dormant windows to prevent lookahead bias
Fixed transaction costs (0.2%)
Multiple seed strategies tested

This isn't overfit garbage.

Inspiration comes from wild places:
🧬 Genetic Programming (Koza)
🤖 Gödel Machines (self-improving systems)
🎯 MAP-Elites (quality-diversity)
🧠 LLM code generation (Codex)

They mashed it all together and pointed it at financial markets.

Current limitation they acknowledge: testing against FIXED historical data doesn't show how it adapts to real-time regime changes. They're working on that.

(Imagine this running live, evolving strategies as the market shifts beneath it...)
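For a sense of what a Generation-0 seed looks like, here is a plain MACD crossover in pure Python. This is a generic sketch of the classic indicator, not the paper's seed code; the helper names and the 12/26/9 defaults are the standard convention, chosen by me for illustration.

```python
def ema(values: list, span: int) -> list:
    """Exponentially weighted moving average with smoothing 2/(span+1)."""
    alpha = 2.0 / (span + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

def macd_signal(prices: list, fast: int = 12, slow: int = 26,
                signal: int = 9) -> list:
    """Return +1 (long) where MACD is above its signal line, else -1."""
    macd = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    sig = ema(macd, signal)
    return [1 if m > s else -1 for m, s in zip(macd, sig)]
```

Fifteen generations of LLM edits turn something this simple into the 90-line strategy above, with regime filters and volatility gates layered on top.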
Future directions they hint at:
Evolving the prompts themselves (meta-optimization)
Cross-asset strategy evolution
Multi-parent recombination between strategies
Real-time deployment with continuous adaptation

This is just the beginning.

Bottom line: we're shifting from "training AI to predict markets" to "AI that rewrites how it thinks about markets."

Not parameter learning. Strategy evolution.

The paper: "ProFiT: Program Search for Financial Trading" by Siper et al.

Wild times ahead. 🚀