🐦 Twitter Post Details

Viewing enriched Twitter post

@_philschmid

How does quantization impact the performance of LLMs? Only minimally! 🤯 @neuralmagic ran 500,000 different evaluations on @AIatMeta Llama using different quantization strategies. The impact is <1%, but the benefits are up to 2.4x faster inference and 3.5x model size reduction! 🔥

TL;DR:
💯 Quantized models achieve 99% accuracy recovery compared to full precision.
🚀 Up to 2.4x speedup and 3.5x model size reduction with quantization.
📊 Tested Llama 3.1 8B, 70B, and 405B models on OpenLLM Leaderboard, ArenaHard, HumanEval, and text-similarity metrics.
🥇 W8A8-FP8 dynamic quantization yields the best results.
🤗 Quantized models available on @huggingface.
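To make the size-reduction claim concrete, here is a minimal, self-contained sketch of symmetric 8-bit weight quantization in NumPy. This is an illustration of the general technique only, not Neural Magic's actual W8A8-FP8 pipeline (which quantizes both weights and activations and uses FP8 rather than int8); the matrix shape and weight scale are arbitrary assumptions.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0  # map the largest |weight| to the int8 range
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# A fake weight matrix standing in for one LLM layer (shape/scale are assumptions).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(1024, 1024)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 stores 1 byte per weight vs 4 for float32 -> 4x smaller before any overhead.
print(w.nbytes / q.nbytes)  # → 4.0

# Round-trip error is tiny relative to the weights, which is why accuracy
# recovery can stay near 99% in the evaluations the post summarizes.
rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(rel_err)  # typically around 1% here
```

The real memory win depends on the starting precision: from float16 weights, 8-bit storage gives ~2x, and mixed schemes plus activation quantization account for figures like the 3.5x cited in the post.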

🔧 Raw API Response

{
  "user": {
    "created_at": "2019-06-18T18:39:49.000Z",
    "default_profile_image": false,
    "description": "Tech Lead and LLMs at @huggingface 👨🏻‍💻 🤗  AWS ML Hero 🦸🏻 | Cloud & ML enthusiast | 📍Nuremberg | 🇩🇪 https://t.co/l1ppq3q3hk",
    "fast_followers_count": 0,
    "favourites_count": 5136,
    "followers_count": 27723,
    "friends_count": 820,
    "has_custom_timelines": false,
    "is_translator": false,
    "listed_count": 656,
    "location": "Nürnberg",
    "media_count": 999,
    "name": "Philipp Schmid",
    "normal_followers_count": 27723,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/1141052916570214400/1725456070",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1831321531852496896/1yBZG884_normal.jpg",
    "screen_name": "_philschmid",
    "statuses_count": 3072,
    "translator_type": "none",
    "url": "https://t.co/8BDXIK6omb",
    "verified": true,
    "withheld_in_countries": [],
    "id_str": "1141052916570214400"
  },
  "id": "1847281289986003043",
  "conversation_id": "1847281289986003043",
  "full_text": "How does quantization impact the performance of LLMs? Only minimal! 🤯 @neuralmagic ran 500,000 different evaluations on @AIatMeta Llama using different quantization strategies. The impact is <1%, but the benefits are up to 2.4 faster inference and 3.5 model size reduction! 🔥\n\nTL;DR;\n💯 Quantized models achieve 99% accuracy recovery compared to full-precision\n🚀 Up to 2.4x speedup and 3.5x model size reduction with quantization.\n📊 Tested Llama 3.1 8B, 70B, and 405B models on OpenLLM Leaderboard, ArenaHard, HumanEval, and text similarity metrics.\n🥇W8A8-FP8 dynamic yields the best results\n🤗 Quantized models available on @huggingface.",
  "reply_count": 22,
  "retweet_count": 81,
  "favorite_count": 492,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [
    {
      "id_str": "997536616481722369",
      "name": "Neural Magic",
      "screen_name": "neuralmagic",
      "profile": "https://twitter.com/neuralmagic"
    },
    {
      "id_str": "1034844617261248512",
      "name": "AI at Meta",
      "screen_name": "AIatMeta",
      "profile": "https://twitter.com/AIatMeta"
    }
  ],
  "urls": [],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/GaLcfWsWMAAtmmv.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/_philschmid/status/1847281289986003043",
  "created_at": "2024-10-18T14:19:15.000Z",
  "#sort_index": "1847281289986003043",
  "view_count": 68119,
  "quote_count": 12,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": true,
  "startUrl": "https://x.com/_philschmid/status/1847281289986003043"
}
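The response above can be consumed with nothing more than the standard library. The sketch below parses an abbreviated copy of the same payload (trimmed to the fields it uses; the field names match the raw response shown, but the "engagement" metric itself is an illustrative definition, not part of the API).

```python
import json

# Abbreviated copy of the raw API response above, keeping only the fields we read.
raw = """
{
  "user": {"screen_name": "_philschmid", "name": "Philipp Schmid", "followers_count": 27723},
  "id": "1847281289986003043",
  "favorite_count": 492,
  "retweet_count": 81,
  "view_count": 68119,
  "created_at": "2024-10-18T14:19:15.000Z",
  "url": "https://twitter.com/_philschmid/status/1847281289986003043"
}
"""

post = json.loads(raw)

# A simple engagement rate: likes + retweets relative to views (our own definition).
engagement = (post["favorite_count"] + post["retweet_count"]) / post["view_count"]

summary = f'@{post["user"]["screen_name"]}: {post["view_count"]:,} views, {engagement:.2%} engagement'
print(summary)  # → @_philschmid: 68,119 views, 0.84% engagement
```

Note that numeric counts (`favorite_count`, `view_count`, …) arrive as JSON numbers, while IDs (`id`, `id_str`) are strings to avoid 64-bit integer precision loss in JavaScript clients, so they should be compared as strings, not converted.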