🐦 Twitter Post Details


@_akhaliq

How to Train Data-Efficient LLMs

paper page: https://t.co/xP1PYQURjY

The training of large language models (LLMs) is expensive. In this paper, we study data-efficient approaches for pre-training LLMs, i.e., techniques that aim to optimize the Pareto frontier of model quality and training resource/data consumption. We seek to understand the tradeoffs associated with data selection routines based on (i) expensive-to-compute data-quality estimates, and (ii) maximization of coverage and diversity-based measures in the feature space. Our first technique, Ask-LLM, leverages the zero-shot reasoning capabilities of instruction-tuned LLMs to directly assess the quality of a training example. To target coverage, we propose Density sampling, which models the data distribution to select a diverse sample. In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density are the best methods in their respective categories. Coverage sampling can recover the performance of the full data, while models trained on Ask-LLM data consistently outperform full-data training -- even when we reject 90% of the original dataset, while converging up to 70% faster.
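
As a rough illustration only (not the authors' implementation), the sketch below shows the general shape of the two ideas in the abstract: an Ask-LLM-style quality filter that prompts an instruction-tuned model to judge one training example, and a coverage-oriented sampler built on a kernel density estimate over example embeddings. The prompt wording, the yes_probability callable, the bandwidth, and the keep fraction are all assumptions made for this sketch.

# Hypothetical sketch of the two sampling ideas described above; names,
# prompt wording, and thresholds are illustrative, not the paper's.
import numpy as np
from sklearn.neighbors import KernelDensity

ASK_LLM_PROMPT = (
    "###\n{example}\n###\n"
    "Does the previous text contain informative content that could help "
    "train a language model? Answer yes or no."
)

def ask_llm_score(example: str, yes_probability) -> float:
    """Ask-LLM-style scoring: an instruction-tuned LLM judges one example.
    `yes_probability` stands for any callable returning the model's
    probability of answering "yes" (a hypothetical API, not a real library)."""
    return yes_probability(ASK_LLM_PROMPT.format(example=example))

def density_sample(embeddings: np.ndarray, keep_fraction: float = 0.1) -> np.ndarray:
    """Coverage-oriented sampling: fit a kernel density estimate over example
    embeddings and keep a spread-out subset (lowest-density examples first,
    purely to illustrate favoring diversity over redundancy)."""
    kde = KernelDensity(bandwidth=1.0).fit(embeddings)
    log_density = kde.score_samples(embeddings)   # higher = more redundant region
    n_keep = max(1, int(keep_fraction * len(embeddings)))
    return np.argsort(log_density)[:n_keep]       # indices of the most novel examples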

Media 1

📊 Media Metadata

{
  "data": [
    {
      "id": "",
      "type": "photo",
      "url": null,
      "media_url": "https://pbs.twimg.com/media/GGbeeBSWQAAs7im.jpg",
      "media_url_https": null,
      "display_url": null,
      "expanded_url": null
    }
  ],
  "score": 1.0,
  "scored_at": "2025-08-09T13:46:07.555126",
  "import_source": "manual_curation_2024",
  "media": [
    {
      "type": "photo",
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1758337319998828783/media_0.jpg?",
      "filename": "media_0.jpg",
      "original_url": "https://pbs.twimg.com/media/GGbeeBSWQAAs7im.jpg"
    }
  ],
  "storage_migrated": true
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2014-04-27T00:20:12.000Z",
    "default_profile_image": false,
    "description": "AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗)\n\ndm for promo",
    "fast_followers_count": 0,
    "favourites_count": 29401,
    "followers_count": 286820,
    "friends_count": 2354,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 3625,
    "location": "subscribe → ",
    "media_count": 15355,
    "name": "AK",
    "normal_followers_count": 286820,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/2465283662/1610997549",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1451191636810092553/kpM5Fe12_normal.jpg",
    "screen_name": "_akhaliq",
    "statuses_count": 26003,
    "translator_type": "none",
    "url": "https://t.co/TbGnXZJwEc",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "2465283662"
  },
  "id": "1758337319998828783",
  "conversation_id": "1758337319998828783",
  "full_text": "How to Train Data-Efficient LLMs\n\npaper page: https://t.co/xP1PYQURjY\n\nThe training of large language models (LLMs) is expensive. In this paper, we study data-efficient approaches for pre-training LLMs, i.e., techniques that aim to optimize the Pareto frontier of model quality and training resource/data consumption. We seek to understand the tradeoffs associated with data selection routines based on (i) expensive-to-compute data-quality estimates, and (ii) maximization of coverage and diversity-based measures in the feature space. Our first technique, Ask-LLM, leverages the zero-shot reasoning capabilities of instruction-tuned LLMs to directly assess the quality of a training example. To target coverage, we propose Density sampling, which models the data distribution to select a diverse sample. In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density are the best methods in their respective categories. Coverage sampling can recover the performance of the full data, while models trained on Ask-LLM data consistently outperform full-data training -- even when we reject 90% of the original dataset, while converging up to 70% faster.",
  "reply_count": 2,
  "retweet_count": 62,
  "favorite_count": 244,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/6zamh3yXdI",
      "expanded_url": "https://huggingface.co/papers/2402.09668",
      "display_url": "huggingface.co/papers/2402.09…"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GGbeeBSWQAAs7im.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/_akhaliq/status/1758337319998828783",
  "created_at": "2024-02-16T03:47:40.000Z",
  "#sort_index": "1758337319998828783",
  "view_count": 38946,
  "quote_count": 1,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://twitter.com/_akhaliq/status/1758337319998828783"
}
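
For completeness, a minimal parsing sketch for a record shaped like the raw response above; the field names come from the JSON keys as shown, while the Post dataclass and the file name post.json are assumptions for illustration.

# Minimal sketch: pull common fields out of a record shaped like the raw
# API response above. Field names match the JSON keys shown; the Post
# dataclass and the example file name are assumptions for illustration.
import json
from dataclasses import dataclass, field

@dataclass
class Post:
    id: str
    screen_name: str
    text: str
    url: str
    favorite_count: int
    retweet_count: int
    media_urls: list = field(default_factory=list)

def parse_post(raw: dict) -> Post:
    return Post(
        id=raw["id"],
        screen_name=raw["user"]["screen_name"],
        text=raw["full_text"],
        url=raw["url"],
        favorite_count=raw["favorite_count"],
        retweet_count=raw["retweet_count"],
        media_urls=[m["media_url"] for m in raw.get("media", [])],
    )

# Example usage, assuming the raw response above is saved as post.json:
# with open("post.json") as f:
#     post = parse_post(json.load(f))
# print(post.screen_name, post.favorite_count, post.media_urls)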