🐦 Twitter Post Details

@dair_ai

Sometimes less is more. More complexity in RL training isn't always the answer.

The default approach to improving small language models with RL today involves multi-stage training pipelines, dynamic hyperparameter schedules, curriculum learning, and length penalties. But what if these techniques are solving problems that simpler approaches never create?

This new research introduces JustRL, a minimal RL recipe that uses single-stage training with fixed hyperparameters to achieve state-of-the-art performance with 1.5B reasoning models.

They stripped away everything non-essential. No progressive context lengthening. No adaptive temperature scheduling. No mid-training reference model resets. No length penalties. Just basic GRPO with fixed hyperparameters throughout training.

Results: JustRL-DeepSeek-1.5B achieves 54.9% average accuracy across nine mathematical benchmarks. JustRL-Nemotron-1.5B reaches 64.3%.

The best part: JustRL uses half the compute of more sophisticated approaches. On AIME 2024, performance improves from 28% to 58% over 4,000 steps of smooth, monotonic training, without the collapses or plateaus that typically motivate complex interventions.

Perhaps most surprising: ablations show that adding "standard tricks" like explicit length penalties and robust verifiers actually degrades performance by collapsing exploration. The model naturally compresses responses from 8,000 to 4,000-5,000 tokens without any penalty term.

The same hyperparameters transfer across both models without tuning. No per-model optimization required.

Paper: https://t.co/88X69gfBbU

Learn to build with AI agents in our academy: https://t.co/zQXQt0PMbG
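
To make "basic GRPO with fixed hyperparameters" concrete, here is a minimal, self-contained sketch of a GRPO-style update: sample a group of responses per prompt, normalize rewards within each group to get advantages, and apply a clipped policy-gradient loss with one fixed set of knobs for the whole run. This is not the paper's code or hyperparameters; the toy categorical policy, the verifier-style reward, and the values of clip_eps, kl_coef, lr, and group_size are illustrative placeholders. JustRL's actual recipe and settings are in the linked paper.

# Illustrative GRPO-style update in the spirit of the post above (not the authors' code).
# Assumptions: a toy categorical "policy" stands in for the 1.5B language model, and a
# toy rule-based reward stands in for the math answer verifier.
import torch

torch.manual_seed(0)

vocab_size, n_prompts, group_size = 16, 4, 8
policy_logits = torch.randn(n_prompts, vocab_size, requires_grad=True)
ref_logits = policy_logits.detach().clone()   # frozen reference, never reset mid-training

# Fixed hyperparameters for the entire run: no schedules, no stages (placeholder values).
clip_eps = 0.2     # PPO-style ratio clipping
kl_coef = 0.0      # no extra penalty terms, matching the "no tricks" description
lr = 1e-2
optimizer = torch.optim.Adam([policy_logits], lr=lr)

def grpo_step(rewards, actions, old_logp):
    """One GRPO update. rewards, actions, old_logp have shape (n_prompts, group_size).
    Advantages are computed within each prompt's group (group-relative), which is what
    distinguishes GRPO from PPO with a learned value network."""
    adv = (rewards - rewards.mean(dim=1, keepdim=True)) / (rewards.std(dim=1, keepdim=True) + 1e-6)

    logp = torch.log_softmax(policy_logits, dim=-1).gather(1, actions)  # current log-probs
    ratio = torch.exp(logp - old_logp)                                   # importance ratio
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
    policy_loss = -torch.min(ratio * adv, clipped * adv).mean()

    # Optional KL-to-reference term (disabled here; kl_coef = 0).
    kl = (torch.softmax(policy_logits, -1)
          * (torch.log_softmax(policy_logits, -1) - torch.log_softmax(ref_logits, -1))).sum(-1).mean()
    loss = policy_loss + kl_coef * kl

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Sample a group of "responses" per prompt and score them with a toy verifier-style
# reward (1.0 if the sampled token id is even, 0.0 otherwise).
with torch.no_grad():
    probs = torch.softmax(policy_logits, dim=-1)
    actions = torch.multinomial(probs, group_size, replacement=True)
    old_logp = torch.log_softmax(policy_logits, dim=-1).gather(1, actions)
    rewards = (actions % 2 == 0).float()

print("loss:", grpo_step(rewards, actions, old_logp))

In the actual setting described in the post, the sampled group members would be full reasoning traces from the 1.5B model scored by a math verifier; the point of the sketch is only the group-relative advantage and the single fixed set of hyperparameters kept constant across all 4,000 steps.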

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2004235730613371251/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2004235730613371251/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-12-31T02:48:33.492917",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2004235730613371251",
  "url": "https://x.com/dair_ai/status/2004235730613371251",
  "twitterUrl": "https://twitter.com/dair_ai/status/2004235730613371251",
  "text": "Sometimes less is more.\n\nMore complexity in RL training isn't always the answer.\n\nThe default approach to improving small language models with RL today involves multi-stage training pipelines, dynamic hyperparameter schedules, curriculum learning, and length penalties.\n\nBut what if these techniques are solving problems that simpler approaches never create?\n\nThis new research introduces JustRL, a minimal RL recipe that uses single-stage training with fixed hyperparameters to achieve state-of-the-art performance on 1.5B reasoning models.\n\nThey stripped away everything non-essential.\n\nNo progressive context lengthening. No adaptive temperature scheduling. No mid-training reference model resets. No length penalties. Just basic GRPO with fixed hyperparameters throughout training.\n\nResults:\n\nJustRL-DeepSeek-1.5B achieves 54.9% average accuracy across nine mathematical benchmarks. JustRL-Nemotron-1.5B reaches 64.3%.\n\nThe best part: JustRL uses 2x less compute than more sophisticated approaches. On AIME 2024, performance improves from 28% to 58% over 4,000 steps of smooth, monotonic training without the collapses or plateaus that typically motivate complex interventions.\n\nPerhaps most surprising: ablations show that adding \"standard tricks\" like explicit length penalties and robust verifiers actually degrades performance by collapsing exploration. The model naturally compresses responses from 8,000 to 4,000-5,000 tokens without any penalty term.\n\nThe same hyperparameters transfer across both models without tuning. No per-model optimization required.\n\nPaper: https://t.co/88X69gfBbU\n\nLearn to build with AI agents in our academy: https://t.co/zQXQt0PMbG",
  "source": "Twitter for iPhone",
  "retweetCount": 67,
  "replyCount": 18,
  "likeCount": 425,
  "quoteCount": 3,
  "viewCount": 38529,
  "createdAt": "Thu Dec 25 17:00:09 +0000 2025",
  "lang": "en",
  "bookmarkCount": 410,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2004235730613371251",
  "displayTextRange": [
    0,
    304
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "dair_ai",
    "url": "https://x.com/dair_ai",
    "twitterUrl": "https://twitter.com/dair_ai",
    "id": "889050642903293953",
    "name": "DAIR.AI",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1643277398522187778/31dedbLo_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/889050642903293953/1742055232",
    "description": "",
    "location": "",
    "followers": 84575,
    "following": 1,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Sun Jul 23 09:12:45 +0000 2017",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 3959,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 104,
    "statusesCount": 2740,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2006002628250243125"
    ],
    "profile_bio": {
      "description": "Democratizing AI research, education, and technologies.",
      "entities": {
        "description": {},
        "url": {
          "urls": [
            {
              "display_url": "dair.ai",
              "expanded_url": "https://www.dair.ai/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/lkqPZtMmfU"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "display_url": "pic.twitter.com/ATjxKIbO3R",
        "expanded_url": "https://twitter.com/dair_ai/status/2004235730613371251/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "2004235726804918272",
        "indices": [
          305,
          328
        ],
        "media_key": "3_2004235726804918272",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARvQecZOWqAACgACG9B5xzFbAXMAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABG9B5xk5aoAAKAAIb0HnHMVsBcwAA",
            "media_key": "3_2004235726804918272"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/G9B5xk5aoAAhU-V.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 797,
              "w": 1424,
              "x": 0,
              "y": 0
            },
            {
              "h": 1424,
              "w": 1424,
              "x": 0,
              "y": 0
            },
            {
              "h": 1623,
              "w": 1424,
              "x": 0,
              "y": 0
            },
            {
              "h": 1780,
              "w": 890,
              "x": 0,
              "y": 0
            },
            {
              "h": 1780,
              "w": 1424,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1780,
          "width": 1424
        },
        "sizes": {
          "large": {
            "h": 1780,
            "w": 1424
          }
        },
        "type": "photo",
        "url": "https://t.co/ATjxKIbO3R"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "urls": [
      {
        "display_url": "arxiv.org/abs/2512.16649",
        "expanded_url": "https://arxiv.org/abs/2512.16649",
        "indices": [
          1576,
          1599
        ],
        "url": "https://t.co/88X69gfBbU"
      },
      {
        "display_url": "dair-ai.thinkific.com",
        "expanded_url": "https://dair-ai.thinkific.com/",
        "indices": [
          1647,
          1670
        ],
        "url": "https://t.co/zQXQt0PMbG"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}