@Alibaba_Qwen
🔥 Qwen 3.5 Series GPTQ-Int4 weights are live, with native vLLM & SGLang support. ⚡️ Less VRAM, faster inference: run powerful models on limited-GPU setups. 👉 Grab the weights + example code: Hugging Face: https://t.co/3MSb7miq68 ModelScope: https://t.co/LGHruBHP6Q
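The "less VRAM" claim is easy to ballpark: Int4 weights take roughly a quarter of the bytes of FP16 ones. A minimal back-of-the-envelope sketch (the parameter count and the ~0.5-bit quantization overhead for scales/zero-points are illustrative assumptions, not official figures, and this ignores KV cache and activation memory):

```python
def weight_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory footprint in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

n = 32e9  # hypothetical 32B-parameter model
fp16 = weight_gib(n, 16)   # full-precision weights
int4 = weight_gib(n, 4.5)  # assumed GPTQ-Int4 with ~0.5 bit/weight overhead
print(f"FP16: {fp16:.1f} GiB, GPTQ-Int4: {int4:.1f} GiB")
# ~59.6 GiB vs ~16.8 GiB: the quantized weights fit on a single 24 GB card
```

The same arithmetic explains the "limited-GPU setups" angle: weights that needed multi-GPU sharding in FP16 can fit in one consumer GPU's memory after Int4 quantization.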