๐Ÿฆ Twitter Post Details

Viewing enriched Twitter post

@eliebakouch

The technical report of @Meituan_LongCat LongCat-Flash is crazy good and full of novelty. The model is a 560B-parameter MoE with ~27B active on average, and an adaptive number of active parameters per token thanks to the zero-computation expert.

1) New architecture
> Layers have 2 attention blocks and both FFN and MoE, so the 2 all-to-all comms can be overlapped. (It's only 28 layers, but you have to account for the 2 attention blocks per layer.)
> They add the zero-computation expert that tokens can choose in order to do nothing, kind of like a "sink" for easy tokens.
> For load balancing, they use a DeepSeek-V3-style aux-loss-free bias to set the average number of real/fake experts per token, with a decay schedule on the bias update. They also do loss balance control.

2) Scaling
> They made changes to MLA/MoE to get variance alignment at init. The gains are pretty impressive in Figure 5, but I don't know to what extent this has an impact later on.
> Model growth init is pretty cool: they first train a 2x smaller model and then, "when it's trained enough" (a bit unclear how many B tokens that is), they init the final model by just stacking the layers of the smaller model.
> They used the paper by @_katieeverett, @Locchiu, et al. to get hyperparameter transfer with SP instead of muP for the 2x smaller model, I guess.

3) Stability
> They track the gradient norm ratio and the cosine similarity between experts to adjust the weight of the load-balancing loss (they recommend a gradient norm ratio < 0.1).
> To avoid large activations, they apply a z-loss to the hidden state with a pretty small coefficient (another alternative to qk-clip/norm).
> They set Adam epsilon to 1e-16 and show that you want it to be lower than the gradient RMS range.

4) Others
> They train on 20T tokens for phase 1, "multiple T of tokens" of mid-training on STEM/code data (70% of the mixture), and 100B for long-context extension without YaRN (80B for 32k, 20B for 128k).
The long-context documents represent 25% of the mixture (not sure if that's % of documents or % of tokens, which changes a lot here).
> The pre-training data pipeline is context extraction, quality filtering, and dedup.
> Nice appendix where they compare the top_k needed for different benchmarks (higher for MMLU at 8.32, lower for GSM8K at 7.46). They also compare token allocation across deep and shallow layers.
> They release two new benchmarks: Meeseeks (multi-turn instruction following) and VitaBench (real-world business scenarios).
> Lots of details on infra/inference, with info on speculative decoding acceptance, quantization, deployment, kernel optimization, comms overlapping, etc.
> List of the relevant papers in the thread 🧵
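The zero-computation ("sink") expert and the aux-loss-free balancing bias described above can be sketched in a few lines. This is a toy illustration, not the report's implementation: the expert counts, the sign-based bias update, and the step size are all assumed for the example (DeepSeek-V3-style selection bias).

```python
import math
import random

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

NUM_REAL = 8   # real FFN experts
NUM_ZERO = 2   # zero-computation experts: output = input, zero FLOPs
TOP_K = 2

# Per-expert router bias, adjusted for load balancing (aux-loss-free style).
bias = [0.0] * (NUM_REAL + NUM_ZERO)

def route(logits):
    """Pick top-k experts by gate score + balancing bias.

    The bias only affects *selection*; the mixing weights use the raw
    gates, so balancing does not distort the model output."""
    gates = softmax(logits)
    ranked = sorted(range(len(gates)), key=lambda e: gates[e] + bias[e],
                    reverse=True)
    chosen = ranked[:TOP_K]
    denom = sum(gates[e] for e in chosen)
    return [(e, gates[e] / denom) for e in chosen]

def expert_forward(e, x):
    if e >= NUM_REAL:
        return x  # zero-computation expert: identity, token "does nothing"
    return [2.0 * v for v in x]  # stand-in for a real FFN expert

def update_bias(load_counts, step, target_per_expert):
    # Nudge under-loaded experts up and over-loaded experts down; in the
    # report the step size follows a decay schedule over training.
    for e, n in enumerate(load_counts):
        bias[e] += step * (1.0 if n < target_per_expert else -1.0)

random.seed(0)
counts = [0] * (NUM_REAL + NUM_ZERO)
for _ in range(1000):  # simulate routing a batch of tokens
    logits = [random.gauss(0.0, 1.0) for _ in range(NUM_REAL + NUM_ZERO)]
    for e, w in route(logits):
        counts[e] += 1
update_bias(counts, step=0.01,
            target_per_expert=1000 * TOP_K / (NUM_REAL + NUM_ZERO))
```

Tokens routed to a zero-computation expert skip the FFN compute entirely, which is how the average number of *active* parameters per token adapts to the input.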
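The model-growth init amounts to reusing a trained half-depth model's layers twice. A minimal sketch, assuming plain depth-stacking of copied layer parameters (the post doesn't specify the exact duplication order; interleaving each layer with its copy is another common variant):

```python
# Sketch of model-growth init by layer stacking: a half-depth model's
# trained layers are duplicated to initialize a model of twice the depth.
# Layer "weights" are plain lists here; a real implementation would copy
# the parameter tensors of each transformer layer.

def grow_by_stacking(small_layers):
    """Initialize a 2x-deeper model from the small model's layers:
    [L1, L2, ..., Ln] -> [L1, L2, ..., Ln, L1, L2, ..., Ln]."""
    # small_layers * 2 repeats references; list(...) makes each copy
    # independent so the two halves can diverge during training.
    return [list(layer) for layer in small_layers * 2]

small = [[0.1, 0.2], [0.3, 0.4]]   # 2 trained layers of the small model
big = grow_by_stacking(small)       # 4 layers for the final model
```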
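The Adam-epsilon point is easy to see numerically: at steady state Adam's update is roughly lr·g/(√v + ε), so once ε approaches the gradient RMS it silently shrinks the effective step. A small illustration (the steady-state approximation is mine, not from the report):

```python
def adam_step_size(grad_rms, eps, lr=1e-3):
    """Effective Adam step magnitude for a steady gradient of the given RMS.

    At steady state m ~ grad and sqrt(v) ~ grad_rms, so
    |update| ~ lr * grad_rms / (grad_rms + eps)."""
    return lr * grad_rms / (grad_rms + eps)

# With tiny gradients (RMS ~1e-8), eps=1e-8 halves the step, while
# eps=1e-16 leaves it essentially untouched -- hence picking eps below
# the gradient RMS range.
print(adam_step_size(1e-8, 1e-8))   # ~0.5 * lr
print(adam_step_size(1e-8, 1e-16))  # ~lr
```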

Media 1

📊 Media Metadata

{
  "media": [
    {
      "url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1961999252311204147/media_0.jpg?",
      "media_url": "https://crmoxkoizveukayfjuyo.supabase.co/storage/v1/object/public/media/posts/1961999252311204147/media_0.jpg?",
      "type": "photo",
      "filename": "media_0.jpg"
    }
  ],
  "processed_at": "2025-09-01T20:04:38.531427",
  "pipeline_version": "2.0"
}

🔧 Raw API Response

{
  "type": "tweet",
  "id": "1961999252311204147",
  "url": "https://x.com/eliebakouch/status/1961999252311204147",
  "twitterUrl": "https://twitter.com/eliebakouch/status/1961999252311204147",
  "text": "The technical report of @Meituan_LongCat LongCat-Flash is crazy good and full of novelty.\nThe model is a 560B passive ~27B active MoE with adaptive number of active parameters depending on the context thanks to the Zero-Computational expert.\n\n1) New architecture\n> Layers have 2 Attention blocks and both FFN and MoE, that way you can overlap the 2 all-to-all coms. (also it's only 28 layers but you have to take into account the 2 attention blocks).\n> They add the zero-computational expert that tokens can choose and do nothing, kinda like a \"sink\" for easy tokens.\n> For load balancing, they have a dsv3-like aux loss free to set the average real/fake expert per token. They apply a decay schedule to this bias update. They also do loss balance control.\n\n2) Scaling\n> They made changes to MLA/MoE to have variance alignment at init. The gains are pretty impressive in Figure 5, but i don't know to what extent this has impact later on.\n> Model growth init is pretty cool, they first train a 2x smaller model and then \"when it's trained enough\" (a bit unclear here how many B tokens) they init the final model by just stacking the layers of the smaller model.\n> They used @_katieeverett @Locchiu and al. paper to have hyperparameter transfer with SP instead of muP for the 2x smaller model ig.\n\n3) Stability\n> They track Gradient Norm Ratio and cosine similarity between experts to adjust the weight of the load balancing loss (they recommend Gradient Norm Ratio <0.1).\n> To avoid large activations, they apply a z-loss to the hidden state, with a pretty small coef (another alternative to qk-clip/norm).\n> They set Adam epsilon to 1e-16 and show that you want it to be lower than the gradient RMS range.\n\n4) Others\n> They train on 20T tokens for phase 1, \"multiple T of tokens\" for mid training on STEM/code data (70% of the mixture), 100B for long context extension without yarn (80B for 32k, 20B for 128k). The long context documents represent 25% of the mixture (not sure if it's % of documents or tokens, which changes a lot here).\n> Pre-training data pipeline is context extraction, quality filtering, dedup.\n> Nice appendix where they show they compare top_k needed for different benchmarks (higher MMLU with 8.32, lower GSM8K with 7.46). They also compare token allocation in deep/shallow layers.\n> They release two new benchmarks Meeseeks (multi-turn IF) and VitaBench (real-world business scenario).\n> Lots of details in the infra/inference with info on speculative decoding acceptance, quantization, deployment, kernel optimization, coms overlapping, etc.\n> List of the different relevent paper in thread 🧵",
  "source": "Twitter for iPhone",
  "retweetCount": 101,
  "replyCount": 17,
  "likeCount": 579,
  "quoteCount": 17,
  "viewCount": 93410,
  "createdAt": "Sun Aug 31 03:47:28 +0000 2025",
  "lang": "en",
  "bookmarkCount": 491,
  "isReply": false,
  "inReplyToId": null,
  "conversationId": "1961999252311204147",
  "displayTextRange": [
    0,
    282
  ],
  "inReplyToUserId": null,
  "inReplyToUsername": null,
  "author": {
    "type": "user",
    "userName": "eliebakouch",
    "url": "https://x.com/eliebakouch",
    "twitterUrl": "https://twitter.com/eliebakouch",
    "id": "1745892418539417600",
    "name": "elie",
    "isVerified": false,
    "isBlueVerified": true,
    "verifiedType": null,
    "profilePicture": "https://pbs.twimg.com/profile_images/1745893660099592193/MmYemsw6_normal.jpg",
    "coverPicture": "https://pbs.twimg.com/profile_banners/1745892418539417600/1751243891",
    "description": "",
    "location": "",
    "followers": 5469,
    "following": 3010,
    "status": "",
    "canDm": true,
    "canMediaTag": true,
    "createdAt": "Fri Jan 12 19:36:21 +0000 2024",
    "entities": {
      "description": {
        "urls": []
      },
      "url": {}
    },
    "fastFollowersCount": 0,
    "favouritesCount": 11092,
    "hasCustomTimelines": true,
    "isTranslator": false,
    "mediaCount": 282,
    "statusesCount": 3110,
    "withheldInCountries": [],
    "affiliatesHighlightedLabel": {},
    "possiblySensitive": false,
    "pinnedTweetIds": [
      "1942614640480961003"
    ],
    "profile_bio": {
      "description": "Training llm's (now: @huggingface)",
      "entities": {
        "description": {
          "user_mentions": [
            {
              "id_str": "0",
              "indices": [
                21,
                33
              ],
              "name": "",
              "screen_name": "huggingface"
            }
          ]
        },
        "url": {
          "urls": [
            {
              "display_url": "huggingface.co/eliebak",
              "expanded_url": "https://huggingface.co/eliebak",
              "indices": [
                0,
                23
              ],
              "url": "https://t.co/Rhb0otAbl1"
            }
          ]
        }
      }
    },
    "isAutomated": false,
    "automatedBy": null
  },
  "extendedEntities": {
    "media": [
      {
        "allow_download_status": {
          "allow_download": true
        },
        "display_url": "pic.twitter.com/MkduoLUIvX",
        "expanded_url": "https://twitter.com/eliebakouch/status/1961999252311204147/photo/1",
        "ext_media_availability": {
          "status": "Available"
        },
        "features": {
          "large": {},
          "orig": {}
        },
        "id_str": "1961998855739691008",
        "indices": [
          283,
          306
        ],
        "media_key": "3_1961998855739691008",
        "media_results": {
          "id": "QXBpTWVkaWFSZXN1bHRzOgwAAQoAARs6a5BuFlAACgACGzpr7MOXYTMAAA==",
          "result": {
            "__typename": "ApiMedia",
            "id": "QXBpTWVkaWE6DAABCgABGzprkG4WUAAKAAIbOmvsw5dhMwAA",
            "media_key": "3_1961998855739691008"
          }
        },
        "media_url_https": "https://pbs.twimg.com/media/GzprkG4WUAA1aQA.jpg",
        "original_info": {
          "focus_rects": [
            {
              "h": 955,
              "w": 1706,
              "x": 0,
              "y": 0
            },
            {
              "h": 1706,
              "w": 1706,
              "x": 0,
              "y": 0
            },
            {
              "h": 1818,
              "w": 1595,
              "x": 0,
              "y": 0
            },
            {
              "h": 1818,
              "w": 909,
              "x": 136,
              "y": 0
            },
            {
              "h": 1818,
              "w": 1706,
              "x": 0,
              "y": 0
            }
          ],
          "height": 1818,
          "width": 1706
        },
        "sizes": {
          "large": {
            "h": 1818,
            "w": 1706
          }
        },
        "type": "photo",
        "url": "https://t.co/MkduoLUIvX"
      }
    ]
  },
  "card": null,
  "place": {},
  "entities": {
    "user_mentions": [
      {
        "id_str": "1961699105434210304",
        "indices": [
          24,
          40
        ],
        "name": "Meituan LongCat",
        "screen_name": "Meituan_LongCat"
      },
      {
        "id_str": "1666204921",
        "indices": [
          1174,
          1188
        ],
        "name": "Katie Everett",
        "screen_name": "_katieeverett"
      },
      {
        "id_str": "72890547",
        "indices": [
          1189,
          1197
        ],
        "name": "Lechao Xiao",
        "screen_name": "Locchiu"
      }
    ]
  },
  "quoted_tweet": null,
  "retweeted_tweet": null,
  "isLimitedReply": false,
  "article": null
}