🐦 Twitter Post Details


@_akhaliq

Meta presents GLoRe

When, Where, and How to Improve LLM Reasoning via Global and Local Refinements

State-of-the-art language models can exhibit impressive reasoning refinement capabilities on math, science, or coding tasks. However, recent work demonstrates that even the best models struggle to identify when and where to refine without access to external feedback. Outcome-based Reward Models (ORMs), trained to predict the correctness of the final answer, offer one convenient solution for deciding when to refine. Process-based Reward Models (PRMs), trained to predict the correctness of intermediate steps, can then be used to indicate where to refine, but they are expensive to train, requiring extensive human annotations.

In this paper, we propose Stepwise ORMs (SORMs), which are trained only on synthetic data to approximate the expected future reward of the optimal policy, V*. More specifically, SORMs are trained to predict the correctness of the final answer when sampling the current policy many times (rather than only once, as in the case of ORMs). Our experiments show that SORMs can more accurately detect incorrect reasoning steps compared to ORMs, thus improving downstream accuracy when doing refinements. We then train global refinement models, which take only the question and a draft solution as input and predict a corrected solution, and local refinement models, which additionally take as input a critique indicating the location of the first reasoning error. We generate training data for both models synthetically by reusing the data used to train the SORM. We find that combining global and local refinements, using the ORM as a reranker, significantly outperforms either one individually, as well as a best-of-three-samples baseline.
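The SORM labeling idea in the abstract, i.e. labeling a reasoning step by whether the policy can still reach the correct final answer when sampled many times from that step's prefix, can be sketched as follows. This is a minimal illustration, not the paper's exact recipe: `policy`, `rollout_correct`, and the "any rollout succeeds" threshold are assumptions made here for clarity.

```python
import random

def rollout_correct(prefix, policy, answer, rng):
    """Hypothetical helper: have the policy complete a partial solution
    and check whether its final answer matches the reference answer."""
    return policy(prefix, rng) == answer

def sorm_labels(steps, policy, answer, k=8, seed=0):
    """Label each step prefix by whether the policy, sampled k times from
    that prefix, can still reach the correct final answer -- a Monte Carlo
    estimate of the value V* that the SORM is trained to approximate."""
    rng = random.Random(seed)
    labels = []
    for i in range(1, len(steps) + 1):
        prefix = steps[:i]
        hits = sum(rollout_correct(prefix, policy, answer, rng) for _ in range(k))
        labels.append(1 if hits > 0 else 0)  # step kept as "correct" if any rollout succeeds
    return labels
```

The first index whose label flips to 0 then serves as the critique fed to the local refinement model (the location of the first reasoning error).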

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1759808947559522682/media_0.png",
      "type": "photo",
      "original_url": "https://pbs.twimg.com/media/GGwY11iWYAAn3uF.png",
      "download_date": "2025-08-13T06:02:36.124202",
      "stored_in_supabase": true
    }
  ],
  "conversion_date": "2025-08-13T00:41:08.860365",
  "format_converted": true,
  "original_structure": "had_media_only"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2014-04-27T00:20:12.000Z",
    "default_profile_image": false,
    "description": "AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗)\n\ndm for promo",
    "fast_followers_count": 0,
    "favourites_count": 29719,
    "followers_count": 295550,
    "friends_count": 2398,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3707,
    "location": "subscribe → ",
    "media_count": 15522,
    "name": "AK",
    "normal_followers_count": 295550,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/2465283662/1610997549",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1451191636810092553/kpM5Fe12_normal.jpg",
    "screen_name": "_akhaliq",
    "statuses_count": 26684,
    "translator_type": "none",
    "url": "https://t.co/TbGnXZJwEc",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "2465283662"
  },
  "id": "1759808947559522682",
  "conversation_id": "1759808947559522682",
  "full_text": "Meta presents GLoRe\n\nWhen, Where, and How to Improve LLM Reasoning via Global and Local Refinements\n\nState-of-the-art language models can exhibit impressive reasoning refinement capabilities on math, science or coding tasks. However, recent work demonstrates that even the best models struggle to identify when and where to refine without access to external feedback. Outcome-based Reward Models (ORMs), trained to predict correctness of the final answer indicating when to refine, offer one convenient solution for deciding when to refine. Process Based Reward Models (PRMs), trained to predict correctness of intermediate steps, can then be used to indicate where to refine. But they are expensive to train, requiring extensive human annotations. In this paper, we propose Stepwise ORMs (SORMs) which are trained, only on synthetic data, to approximate the expected future reward of the optimal policy or V^{star}. More specifically, SORMs are trained to predict the correctness of the final answer when sampling the current policy many times (rather than only once as in the case of ORMs). Our experiments show that SORMs can more accurately detect incorrect reasoning steps compared to ORMs, thus improving downstream accuracy when doing refinements. We then train global refinement models, which take only the question and a draft solution as input and predict a corrected solution, and local refinement models which also take as input a critique indicating the location of the first reasoning error. We generate training data for both models synthetically by reusing data used to train the SORM. We find combining global and local refinements, using the ORM as a reranker, significantly outperforms either one individually, as well as a best of three sample baseline.",
  "reply_count": 2,
  "retweet_count": 44,
  "favorite_count": 174,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GGwY11iWYAAn3uF.png",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/_akhaliq/status/1759808947559522682",
  "created_at": "2024-02-20T05:15:23.000Z",
  "#sort_index": "1759808947559522682",
  "view_count": 21624,
  "quote_count": 1,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/_akhaliq/status/1759808947559522682"
}