🐦 Twitter Post Details

@mervenoyann

AutoGPTQ is now natively supported in transformers! 🤩 AutoGPTQ is a library for GPTQ, a post-training quantization technique to quantize autoregressive generative LLMs. 🦜 With this integration, you can quantize LLMs with few lines of code! Read more 👉 https://t.co/QPW3DJ0erm https://t.co/9OFACotev7

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://pbs.twimg.com/media/F4OfN7WWYAATZum.jpg",
      "type": "photo",
      "original_url": "https://pbs.twimg.com/media/F4OfN7WWYAATZum.jpg",
      "format_converted_from_list": true
    }
  ],
  "conversion_date": "2025-08-13T00:32:44.409897",
  "format_converted": true,
  "original_structure": "had_media_only"
}

🔧 Raw API Response

{
  "user": {
    "created_at": "2019-12-04T16:45:25.000Z",
    "default_profile_image": false,
    "description": "open-sourceress working at @huggingface 🧙🏻‍♀️🤗  @GoogleDevExpert in Machine Learning 🧑 MScc in Data Science",
    "fast_followers_count": 0,
    "favourites_count": 36056,
    "followers_count": 47729,
    "friends_count": 3580,
    "has_custom_timelines": true,
    "is_translator": false,
    "listed_count": 543,
    "location": "12ème, Paris",
    "media_count": 2893,
    "name": "merve",
    "normal_followers_count": 47729,
    "possibly_sensitive": false,
    "profile_banner_url": "https://pbs.twimg.com/profile_banners/1202267633049100291/1640516441",
    "profile_image_url_https": "https://pbs.twimg.com/profile_images/1667838713471156224/I3bK25fg_normal.jpg",
    "screen_name": "mervenoyann",
    "statuses_count": 20097,
    "translator_type": "none",
    "url": "https://t.co/EqdkADVk8r",
    "verified": false,
    "withheld_in_countries": [],
    "id_str": "1202267633049100291"
  },
  "id": "1694373167169720633",
  "conversation_id": "1694373167169720633",
  "full_text": "AutoGPTQ is now natively supported in transformers! 🤩\nAutoGPTQ is a library for GPTQ, a post-training quantization technique to quantize  autoregressive generative LLMs. 🦜\nWith this integration, you can quantize LLMs with few lines of code!\nRead more 👉 https://t.co/QPW3DJ0erm https://t.co/9OFACotev7",
  "reply_count": 1,
  "retweet_count": 30,
  "favorite_count": 163,
  "hashtags": [],
  "symbols": [],
  "user_mentions": [],
  "urls": [
    {
      "url": "https://t.co/QPW3DJ0erm",
      "expanded_url": "https://hf.co/blog/gptq-integration",
      "display_url": "hf.co/blog/gptq-inte…"
    }
  ],
  "media": [
    {
      "media_url": "https://pbs.twimg.com/media/F4OfN7WWYAATZum.jpg",
      "type": "photo"
    }
  ],
  "url": "https://twitter.com/mervenoyann/status/1694373167169720633",
  "created_at": "2023-08-23T15:36:57.000Z",
  "#sort_index": "1694373167169720633",
  "view_count": 24706,
  "quote_count": 0,
  "is_quote_tweet": false,
  "is_retweet": false,
  "is_pinned": false,
  "is_truncated": false,
  "startUrl": "https://twitter.com/mervenoyann/status/1694373167169720633"
}