@arankomatsuzaki
AgentFounder: Scaling Agents via Continual Pre-training
• First to propose Agentic CPT — builds agentic foundation models before fine-tuning
• Solves post-training bottlenecks (capabilities + alignment conflict)
• Data synthesis: first-order (planning/actions) + higher-order (multi-step decisions)
• Two-stage training (32K → 128K context)
• SOTA: 39.9% BrowseComp-en, 72.8% GAIA

abs: https://t.co/LTCuW2LCo4

(5/N)