@dair_ai
New research from IBM Research on Self-Improving Agents.

Agents have "amnesia." An agent that struggles with a particular API authentication flow today will struggle with the same flow tomorrow unless manually updated. This paper introduces a framework for automatically extracting actionable learnings from agent execution trajectories and using them to improve future performance through contextual memory retrieval.

The system generates three types of guidance: strategy tips from successful patterns, recovery tips from failure handling, and optimization tips from inefficient but successful executions. A Trajectory Intelligence Extractor performs semantic analysis of agent reasoning patterns, while a Decision Attribution Analyzer traces backwards through reasoning steps to identify root causes.

On the AppWorld benchmark, the memory-enhanced agent achieves 73.2% task goal completion vs. a 69.6% baseline (+3.6 pp) and 64.3% scenario goal completion vs. 50.0% (+14.3 pp). The benefits scale with task complexity: difficulty-3 tasks show the most dramatic improvement, +28.5 pp on scenario goals (19.1% to 47.6%), a 149% relative increase.

Why it matters: agents that learn from their own execution traces, not just from training data, can systematically improve without manual prompt engineering. The self-reinforcing cycle of better tips producing better trajectories producing better tips is a practical path toward self-improving agent systems.

Paper: https://t.co/8IOIeEgFM5

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
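The extract-then-retrieve loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Tip` record, the trajectory fields (`success`, `recovered_from_error`, `steps`, `budget`), and the keyword-overlap retrieval are all hypothetical stand-ins for the paper's semantic analysis and contextual memory retrieval.

```python
from dataclasses import dataclass

@dataclass
class Tip:
    kind: str                 # "strategy" | "recovery" | "optimization"
    task_keywords: frozenset  # crude stand-in for a semantic index
    text: str                 # the actionable lesson injected into context

def extract_tip(trajectory):
    """Classify a finished trajectory into one of the three tip types.

    `trajectory` is a hypothetical dict with fields: task (str),
    success (bool), recovered_from_error (bool), steps (int),
    budget (int), lesson (str).
    """
    if not trajectory["success"]:
        return None  # only successful (or recovered) runs yield tips here
    keywords = frozenset(trajectory["task"].lower().split())
    if trajectory["recovered_from_error"]:
        kind = "recovery"      # lesson learned while handling a failure
    elif trajectory["steps"] > trajectory["budget"]:
        kind = "optimization"  # succeeded, but inefficiently
    else:
        kind = "strategy"      # clean successful pattern
    return Tip(kind, keywords, trajectory["lesson"])

def retrieve_tips(task, memory, k=2):
    """Contextual retrieval: rank stored tips by keyword overlap with
    the new task and return the top-k lessons for the agent's prompt."""
    words = set(task.lower().split())
    ranked = sorted(memory,
                    key=lambda t: len(words & t.task_keywords),
                    reverse=True)
    return [t.text for t in ranked[:k] if words & t.task_keywords]
```

Each completed run feeds `extract_tip`; the resulting memory is queried by `retrieve_tips` before the next task, closing the tips-to-trajectories-to-tips loop the post describes.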