🐦 Twitter Post Details

@HowToAI_

🚨 MIT proved you can delete 90% of a neural network without losing accuracy.

Researchers found that inside every massive model there is a "winning ticket": a tiny subnetwork that does all the heavy lifting.

They showed that if you find it and reset it to its original initialization, it trains to perform just like the giant version.

But there was a catch that killed adoption instantly: you had to train the massive model first just to find the ticket. Nobody wanted to train twice to deploy once. It was a cool academic flex, but useless for production.

The original 2018 paper was mind-blowing. But today, after 8 years, we finally have the silicon-level breakthrough we were waiting for: structured sparsity.

Modern GPUs (NVIDIA Ampere and later) don't just "simulate" pruning anymore. They have native support for fine-grained structured sparsity (2:4 patterns) built directly into the hardware. It's not theoretical; it's silicon-level acceleration.

The math is terrifyingly good: the hardware 2:4 pattern is 50% sparse, which means roughly half the weight memory traffic plus up to 2× compute throughput. Real speed, near-zero accuracy loss.

Three things just made this production-ready in 2026:

- Pruning-aware training (you train sparse from day one)
- Native support in PyTorch 2.0 and the Apple Neural Engine
- The realization that AI models are 90% redundant by design

Evolution over-parameterizes everything. We're finally learning how to prune.

The era of bloated, inefficient models is officially over. The tooling finally caught up to the theory, and the winners will be the ones who stop paying for 90% of the weights they don't even need.

The future of AI is smaller, faster, and smarter.
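To make the 2:4 pattern concrete, here is a minimal pure-Python sketch (not NVIDIA's or PyTorch's actual API) of the pruning rule the hardware expects: in every contiguous group of four weights, keep the two with the largest magnitude and zero the rest, which yields exactly 50% sparsity by construction. The function name and example weights are illustrative.

```python
def prune_2_to_4(weights):
    """Return a copy of `weights` pruned to the 2:4 sparse pattern:
    in each group of 4 consecutive values, keep the 2 largest by
    magnitude and replace the other 2 with 0.0."""
    if len(weights) % 4 != 0:
        raise ValueError("length must be a multiple of 4")
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(4), key=lambda j: abs(group[j]), reverse=True)[:2]
        pruned.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return pruned

w = [0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.25, 0.01]
print(prune_2_to_4(w))  # exactly half the entries become 0.0
```

Because every group keeps exactly two nonzeros, the GPU can store just the surviving values plus a tiny per-group index, which is where the memory-traffic savings and the doubled Tensor Core throughput come from.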

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2042207410484654247/media_0.jpg",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/2042207410484654247/media_0.jpg",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2026-04-09T19:36:46.598244",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "2042207410484654247",
  "url": "https://x.com/HowToAI_/status/2042207410484654247",
  "twitterUrl": "https://twitter.com/HowToAI_/status/2042207410484654247",
  "text": "🚨 MIT proved you can delete 90% of a neural network without losing accuracy.\n\nResearchers found that inside every massive model, there is a \"winning ticket”, a tiny subnetwork that does all the heavy lifting. \n\nThey proved if you find it and reset it to its original state, it performs exactly like the giant version.\n\nBut there was a catch that killed adoption instantly..\n\nyou had to train the massive model first to find the ticket. nobody wanted to train twice just to deploy once. it was a cool academic flex, but useless for production.\n\nThe original 2018 paper was mind-blowing:\n\nBut today, after 8 years…\n\nWe finally have the silicon-level breakthrough we were waiting for: structured sparsity.\n\nModern GPUs (NVIDIA Ampere+) don’t just “simulate” pruning anymore. \n\nThey have native support for block sparsity (2:4 patterns) built directly into the hardware. \n\nIt’s not theoretical, it’s silicon-level acceleration.\n\nThe math is terrifyingly good: a 90% sparse network = 50% less memory bandwidth + 2× compute throughput. Real speed.. zero accuracy loss.\n\nThree things just made this production-ready in 2026:\n\n- pruning-aware training (you train sparse from day one)\n- native support in pytorch 2.0 and the apple neural engine\n- the realization that ai models are 90% redundant by design\n\nEvolution over-parameterizes everything. We’re finally learning how to prune.\n\nThe era of bloated, inefficient models is officially over. The tooling finally caught up to the theory, and the winners are going to be the ones who stop paying for 90% of weights they don’t even need.\n\nThe future of AI is smaller, faster, and smarter.",
  "source": "Twitter for iPhone",
  "retweetCount": 153,
  "replyCount": 41,
  "likeCount": 979,
  "quoteCount": 18,
  "viewCount": 46284,
  "createdAt": "Thu Apr 09 11:46:03 +0000 2026",
  "lang": "en",
  "bookmarkCount": 716,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "2042207410484654247",
  "displayTextRange": [
    0,
    276
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "HowToAI_",
    "url": "https://x.com/HowToAI_",
    "twitterUrl": "https://twitter.com/HowToAI_",
    "id": "1478392249675300868",
    "name": "How To AI",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/2009866827518881792/Th8eH2US_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1478392249675300868/1768026357",
    "description": "",
    "location": "Earth",
    "followers": 2340,
    "following": 69,
    "status": "",
    "canDm": false,
    "canMediaTag": true,
    "createdAt": "Tue Jan 04 15:46:16 +0000 2022",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 507,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 89,
    "statusesCount": 387,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "2041124232437072086"
    ],
    "profile_bio": {
      "description": "Trustworthy AI education.",
      "entities": {
        "description": {
          "hashtags": [],
          "symbols": [],
          "urls": [],
          "user_mentions": []
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/9jUyhlGegq",
        "expanded_url": "https://twitter.com/HowToAI_/status/2042207410484654247/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": []
          },
          "orig": {
            "faces": []
          }
        },
        "id_str": "2042207265734987776",
        "indices": [
          277,
          300
        ],
        "media_key": "3_2042207265734987776",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARxXYLAn2xAACgACHFdg0dubsKcAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABHFdgsCfbEAAKAAIcV2DR25uwpwAA",
            "media_key": "3_2042207265734987776"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/HFdgsCfbEAAfARG.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 860,
              "w": 1536,
              "x": 0,
              "y": 0
            },
            {
              "h": 1024,
              "w": 1024,
              "x": 64,
              "y": 0
            },
            {
              "h": 1024,
              "w": 898,
              "x": 127,
              "y": 0
            },
            {
              "h": 1024,
              "w": 512,
              "x": 320,
              "y": 0
            },
            {
              "h": 1024,
              "w": 1536,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1024,
          "width": 1536
        },
        "sizes": {
          "large": {
            "h": 1024,
            "w": 1536
          }
        },
        "type": "photo",
        "url": "https://t.co/9jUyhlGegq"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "hashtags": [],
    "symbols": [],
    "urls": [],
    "user_mentions": []
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "communityInfo": null,
  "article": null
}