@ModelScope2022
🚀 New on ModelScope: QwenLong-L1.5 is now fully open-source! A 30B model (3B active params) that matches GPT-5 & Gemini-2.5-Pro in long-context reasoning.

🔥 Key wins:
✅ +31.7 pts on OpenAI's MRCR (128K context → SOTA across all models)
✅ Matches Gemini-2.5-Pro on 6 major long-QA benchmarks
✅ +9.69 on CorpusQA, +6.16 on LongBench-V2

How? Three breakthroughs:
1️⃣ Synthetic data at scale: 14.1K long-reasoning samples built from 9.2B tokens, no human labeling. Avg. length: 34K tokens (max: 119K!).
2️⃣ Stable RL training: task-balanced sampling + Adaptive Entropy-Controlled Policy Optimization (AEPO) for reliable long-sequence learning (sketch below).
3️⃣ Memory-augmented architecture: iterative memory updates beyond the 256K window → +9.48 pts on 1M–4M token tasks (second sketch below)!

All weights, data recipes, and training code are open:
📄 Paper: https://t.co/MzlHXeardA
📥 Model: https://t.co/kE3q4HLhdk
💻 GitHub: https://t.co/KghF3mp9H3
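For readers skimming the thread, here is a minimal Python sketch of what "task-balanced sampling" plus adaptive entropy control could look like. The function names and the update rule are illustrative assumptions, not the actual AEPO implementation; see the paper and GitHub repo for the real thing.

```python
import random
from collections import defaultdict

def sample_balanced_batch(samples, batch_size):
    """Draw a batch with roughly equal counts per task type, so no
    single long-context task dominates an RL update. Each sample is
    assumed (hypothetically) to carry a 'task' key."""
    by_task = defaultdict(list)
    for s in samples:
        by_task[s["task"]].append(s)
    tasks = list(by_task)
    per_task = max(1, batch_size // len(tasks))
    batch = []
    for t in tasks:
        batch.extend(random.sample(by_task[t], min(per_task, len(by_task[t]))))
    return batch[:batch_size]

def adapt_entropy_coef(coef, policy_entropy, target, lr=0.01):
    """One common flavor of adaptive entropy control (assumption, not
    AEPO itself): raise the entropy bonus when the policy collapses
    below a target entropy, lower it when exploration runs too hot."""
    return max(0.0, coef + lr * (target - policy_entropy))
```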
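And a toy sketch of the iterative-memory idea for inputs past the native window: process the document in chunks and carry a compressed memory forward, so the final answer conditions on the memory rather than the raw 1M–4M tokens. Here `llm` is a placeholder for any text-generation call; the actual QwenLong-L1.5 mechanism is specified in the paper.

```python
CHUNK_TOKENS = 200_000  # illustrative: stay under the 256K window, leaving room for memory

def answer_long_document(llm, document_chunks, question):
    """document_chunks: text pieces pre-split to <= CHUNK_TOKENS each.
    llm: any callable prompt -> text (placeholder, not the repo's API)."""
    memory = ""
    for chunk in document_chunks:
        # Fold each new chunk into a compact, question-relevant memory.
        memory = llm(
            f"Memory so far:\n{memory}\n\n"
            f"New evidence:\n{chunk}\n\n"
            f"Update the memory with everything relevant to: {question}"
        )
    # Answer from the compact memory alone.
    return llm(f"Memory:\n{memory}\n\nQuestion: {question}\nAnswer:")
```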