๐Ÿฆ Twitter Post Details

Viewing enriched Twitter post

@gordic_aleksa

New in-depth blog post - "Inside vLLM: Anatomy of a High-Throughput LLM Inference System". Probably the most in-depth explanation of how LLM inference engines, and vLLM in particular, work!

Took me a while to get this level of understanding of the codebase and then to write this one up - I quickly realized I underestimated the effort. 😅 It could have easily been a book/booklet (lol).

I covered:

* Basics of inference engine flow (input/output request processing, scheduling, paged attention, continuous batching)

* "Advanced" stuff: chunked prefill, prefix caching, guided decoding (grammar-constrained FSM), speculative decoding, disaggregated P/D

* Scaling up: going from smaller LMs that can be hosted on a single GPU all the way to trillion+ params (via TP/PP/SP) -> multi-GPU, multi-node setups

* Serving the model on the web: going from offline deployment to multiple API servers, load balancing, DP coordinator, multiple-engine setups :)

* Measuring perf of inference systems (latency (TTFT, ITL, e2e, TPOT), throughput) and the GPU perf roofline model

Lots of examples, lots of visuals!

---

I realize I've been silent on social - many of you noticed, and thanks for reaching out! :) --> I'm so back! Lots of things happened.

Also, in general, I'm a bit sick of superficial content - it really is the equivalent of junk food (h/t @karpathy).

I want to do the best/deepest technical work of my life over the next years and write much more in depth (high-quality organic food ;)), so I might not be as frequent around here as I used to be (? we'll see). I'll make it a goal to share a few paper summaries a week, or stuff that's relevant / in the zeitgeist.

If you have any topics from the past few weeks/months, drop them in the comments - I might focus on some of those in my next posts.

---

Huge thank you to @Hyperstackcloud for giving me an H100 node to run some of the experiments and analysis I needed to write this up. The team there, led by Christopher Starkey, is amazing!
Also a big thank you to Nick Hill (who did a very thorough review of the post - basically a code review, lol; Nick's a core vLLM contributor and principal SWE at Red Hat) and to my friends Kyle Krannen (NVIDIA Dynamo), @marksaroufim (PyTorch), and @ashVaswani (goat) for taking the time during the weekend when they didn't have to!
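The post's last bullet lists the standard inference-latency metrics (TTFT, ITL, e2e, TPOT). As a minimal illustrative sketch — not code from the blog post, and with hypothetical names — here is how those metrics can be derived from per-token timestamps for a single request:

```python
# Illustrative sketch of the latency metrics mentioned in the post:
# TTFT (time to first token), ITL (inter-token latency),
# e2e (end-to-end latency), and TPOT (time per output token).
# Function and parameter names are hypothetical.

def latency_metrics(request_start: float, token_times: list[float]) -> dict:
    """Derive per-request latency metrics.

    request_start: wall-clock time the request was submitted
    token_times:   wall-clock time each output token was emitted
    """
    ttft = token_times[0] - request_start   # time to first token (prefill-dominated)
    e2e = token_times[-1] - request_start   # end-to-end latency
    # inter-token latency: gap between each pair of consecutive output tokens
    itl = [b - a for a, b in zip(token_times, token_times[1:])]
    # time per output token: decode time averaged over tokens after the first
    tpot = (e2e - ttft) / max(len(token_times) - 1, 1)
    return {"ttft": ttft, "e2e": e2e, "itl": itl, "tpot": tpot}
```

Throughput would then be measured across many concurrent requests (total tokens emitted per second), which is where continuous batching pays off.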

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1962545137613173124/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1962545137613173124/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-09-01T19:51:36.069065",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1962545137613173124",
  "url": "https://x.com/gordic_aleksa/status/1962545137613173124",
  "twitterUrl": "https://twitter.com/gordic_aleksa/status/1962545137613173124",
  "text": "New in-depth blog post - \"Inside vLLM: Anatomy of a High-Throughput LLM Inference System\". Probably the most in depth explanation of how LLM inference engines and vLLM in particular work!\n\nTook me a while to get this level of understanding of the codebase and then to write up this one - i quickly realized i understimated the effort. 😅 It could have easily been a book/booklet (lol).\n\nI covered:\n\n* Basics of inference engine flow (input/output request processing, scheduling, paged attention, continuous batching)\n\n* \"Advanced\" stuff: chunked prefill, prefix caching, guided decoding (grammar-constrained FSM), speculative decoding, disaggregated P/D\n\n* Scaling up: going from smaller LMs that can be hosted on a single GPU all the way to trillion+ params (via TP/PP/SP) -> multi-GPU, multi-node setup\n\n* Serving the model on the web: going from offline deployment to multiple API servers, load balancing, DP coordinator, multiple engines setup :)\n\n* Measuring perf of inference systems (latency (ttft, itl, e2e, tpot), throughput) and GPU perf roofline model\n\nLots of examples, lots of visuals!\n\n---\n\nI realize i've been silent on social - many of you noticed and thanks for reaching out! :) --> I'm so back! lots of things happened.\n\nAlso, in general, I'm a bit sick of superficial content, it really is an equivalent of junk food (h/t @karpathy).\n\nI want to do the best/deepest technical work of my life over the next years and write much more in depth (high quality organic food ;)) so I might not be as frequent around here as i used to be (? we'll see).\n\nI'll make it a goal to share a few paper summaries a week or stuff that's relevant / in the zeitgeist.\n\nIf you have any topics that happened over the past few weeks/months drop it down in the comments i might focus on some of those in my next posts.\n\n---\n\nHuge thank you to @Hyperstackcloud for giving me an H100 node to run some of the experiments and analysis that i needed to write this up. The team there led by Christopher Starkey is amazing!\n\nAlso a big thank you to Nick Hill (who did a very thorough review of the post - basically a code review lol; Nick's a core vLLM contributor and principal SWE at RedHat) and to my friends Kyle Krannen (NVIDIA Dynamo), @marksaroufim (PyTorch), and @ashVaswani (goat) for taking the time during weekend when they didn't have to!",
  "source": "Twitter for iPhone",
  "retweetCount": 129,
  "replyCount": 22,
  "likeCount": 870,
  "quoteCount": 18,
  "viewCount": 62201,
  "createdAt": "Mon Sep 01 15:56:37 +0000 2025",
  "lang": "en",
  "bookmarkCount": 1208,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1962545137613173124",
  "displayTextRange": [
    0,
    277
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "gordic_aleksa",
    "url": "https://x.com/gordic_aleksa",
    "twitterUrl": "https://twitter.com/gordic_aleksa",
    "id": "907007346546810881",
    "name": "Aleksa Gordić (水平问题)",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1961835942957985792/JsXmqrBl_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/907007346546810881/1710762036",
    "description": "",
    "location": "San Francisco, CA",
    "followers": 23985,
    "following": 231,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Sun Sep 10 22:26:17 +0000 2017",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 7569,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 872,
    "statusesCount": 4734,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1837881845331251574"
    ],
    "profile_bio": {
      "description": "getting us to singularity with friends\n\nx @GoogleDeepMind @Microsoft\n\ntensor core maximalist",
      "entities": {
        "description": {
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                42,
                57
              ],
              "name": "",
              "screen_name": "GoogleDeepMind"
            },
            {
              "id_str": "0",
              "indices": [
                58,
                68
              ],
              "name": "",
              "screen_name": "Microsoft"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "aleksagordic.com",
              "expanded_url": "https://www.aleksagordic.com/",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/FJnDY3haei"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/F2wsYaFO7q",
        "expanded_url": "https://twitter.com/gordic_aleksa/status/1962545137613173124/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {
            "faces": [
              {
                "h": 93,
                "w": 93,
                "x": 39,
                "y": 1185
              },
              {
                "h": 665,
                "w": 665,
                "x": 67,
                "y": 47
              }
            ]
          },
          "orig": {
            "faces": [
              {
                "h": 123,
                "w": 123,
                "x": 52,
                "y": 1563
              },
              {
                "h": 878,
                "w": 878,
                "x": 89,
                "y": 63
              }
            ]
          }
        },
        "id_str": "1962544891290177536",
        "indices": [
          278,
          301
        ],
        "media_key": "3_1962544891290177536",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARs8XC49G9AACgACGzxcZ5cacYQAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABGzxcLj0b0AAKAAIbPFxnlxpxhAAA",
            "media_key": "3_1962544891290177536"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/GzxcLj0b0AAvCBn.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 1024,
              "w": 1829,
              "x": 0,
              "y": 364
            },
            {
              "h": 1829,
              "w": 1829,
              "x": 0,
              "y": 0
            },
            {
              "h": 2085,
              "w": 1829,
              "x": 0,
              "y": 0
            },
            {
              "h": 2700,
              "w": 1350,
              "x": 0,
              "y": 0
            },
            {
              "h": 2700,
              "w": 1829,
              "x": 0,
              "y": 0
            }
          ],
          "height": 2700,
          "width": 1829
        },
        "sizes": {
          "large": {
            "h": 2048,
            "w": 1387
          }
        },
        "type": "photo",
        "url": "https://t.co/F2wsYaFO7q"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "user_mentions": [
      {
        "id_str": "33836629",
        "indices": [
          1340,
          1349
        ],
        "name": "Andrej Karpathy",
        "screen_name": "karpathy"
      },
      {
        "id_str": "1668553134262607877",
        "indices": [
          1836,
          1852
        ],
        "name": "Hyperstack",
        "screen_name": "Hyperstackcloud"
      },
      {
        "id_str": "35473191",
        "indices": [
          2228,
          2241
        ],
        "name": "Mark Saroufim",
        "screen_name": "marksaroufim"
      },
      {
        "id_str": "874887507174981633",
        "indices": [
          2257,
          2268
        ],
        "name": "Ashish Vaswani",
        "screen_name": "ashVaswani"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}