@dair_ai
// Scaling Coding Agents via Atomic Skills //

Most coding agents train end-to-end on full tasks like resolving GitHub issues. But complex software engineering is really a composition of simpler skills, and training on the composite makes it hard to improve the parts.

This research formalizes five atomic skills for coding agents: code localization, code editing, unit-test generation, issue reproduction, and code review. Instead of optimizing for a single composite task, the authors apply joint RL across all five skills simultaneously.

The result: an 18.7% improvement that transfers to unseen composite tasks like bug-fixing and code refactoring without any task-specific training.

Decomposition is a powerful scaling strategy. Rather than throwing bigger models at harder tasks, this work shows that systematically training the building blocks produces agents that generalize better. The atomic skills framework also gives you a clearer picture of where an agent is actually failing.

Paper: https://t.co/7LhB8hcTcJ

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
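A minimal sketch of the joint-RL idea, assuming a uniform sampler over the five atomic skills named in the post. The reward functions here are hypothetical placeholders (real training would score an agent's rollouts on each skill); only the skill names come from the paper.

```python
import random

# The five atomic skills from the post. The lambdas are stand-in reward
# functions for illustration -- not the paper's actual scoring.
SKILLS = {
    "code_localization": lambda: random.random(),
    "code_editing": lambda: random.random(),
    "unit_test_generation": lambda: random.random(),
    "issue_reproduction": lambda: random.random(),
    "code_review": lambda: random.random(),
}

def joint_rl_step(policy_update):
    """One joint-RL step: sample an atomic skill uniformly, score an
    episode with that skill's reward, and pass (skill, reward) to the
    (hypothetical) policy update."""
    skill = random.choice(sorted(SKILLS))
    reward = SKILLS[skill]()
    policy_update(skill, reward)
    return skill, reward

# Toy "policy update": track per-skill rewards so we can see that every
# skill gets optimized, rather than only the composite task.
stats = {s: [] for s in SKILLS}
for _ in range(1000):
    joint_rl_step(lambda s, r: stats[s].append(r))

averages = {s: sum(rs) / len(rs) for s, rs in stats.items() if rs}
```

The point of the sketch: one shared policy receives gradient signal from all five skills interleaved, which is what lets improvements on the parts transfer to unseen composites.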