Your curated collection of saved posts and media
Time to follow https://t.co/dqWrV1R3t7 to get the notification!
Get a first look at Tencent HY World 1.5 (WorldPlay)! Our newest world model with real-time interaction and long-term memory. It's going *open-source* tomorrow. https://t.co/zvMI3rCX7u
Introducing Nemotron-Cascade! We're thrilled to release Nemotron-Cascade, a family of general-purpose reasoning models trained with cascaded, domain-wise reinforcement learning (Cascade RL), delivering best-in-class performance across a wide range of benchmarks.

Coding powerhouse. After RL, our 14B model:
• Surpasses DeepSeek-R1-0528 (671B) on LiveCodeBench v5/v6/Pro.
• Achieves silver-medal performance at IOI 2025.
• Reaches 43.1% pass@1 on SWE-Bench Verified, and 53.8% with test-time scaling.

What is Cascade RL? Instead of mixing heterogeneous prompts across domains, Cascade RL trains sequentially, domain by domain, which reduces engineering complexity, mitigates heterogeneous verification latencies, and enables domain-specific curricula and tailored hyperparameter tuning.

Key insight: using RLHF for alignment as a pre-step dramatically boosts complex reasoning, far beyond preference optimization. Subsequent domain-wise RLVR stages rarely hurt the benchmark performance attained in earlier domains and may even improve it, as illustrated in the following figure.

Models & training data: https://t.co/wfVcAaMocA
Technical report with detailed training and data recipes: https://t.co/FdMINvB4yM
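The sequential, domain-by-domain scheduling that distinguishes Cascade RL from mixed-domain training can be sketched as a simple loop. This is an illustration of the scheduling idea only, with hypothetical names (`train_rl_stage`, the domain list), not Nemotron-Cascade's actual code:

```python
# Minimal sketch of cascaded, domain-wise RL scheduling.
# The domain order below is an assumption for illustration: an RLHF
# alignment pre-step, then domain-wise RLVR stages.
DOMAINS = ["alignment_rlhf", "math_rlvr", "code_rlvr", "agentic_rlvr"]

def train_rl_stage(policy, domain):
    """Run one RL stage on prompts from a single domain.

    Toy stand-in: record the domain so the stage order is inspectable.
    A real stage would run RLHF/RLVR with domain-specific curricula and
    hyperparameters, which is the point of cascading.
    """
    return policy + [domain]

def cascade_rl(policy):
    # Train sequentially, domain by domain, instead of mixing
    # heterogeneous prompts in one run. Each stage starts from the
    # checkpoint produced by the previous stage.
    for domain in DOMAINS:
        policy = train_rl_stage(policy, domain)
    return policy

print(cascade_rl([]))  # stages applied in cascade order
```

The design choice this captures: because each stage is single-domain, verification latency is uniform within a stage and each stage can tune its own curriculum and hyperparameters.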

Last year Molmo set SOTA on image benchmarks + pioneered image pointing. Millions of downloads later, Molmo 2 brings Molmo's grounded multimodal capabilities to video, and leads many open models on challenging industry video benchmarks. 🧵 https://t.co/uFs30b2DR3

Fine-tune Nemotron 3 Nano in TRL with coding agents like Claude Code, in Colab, locally, or on the Hub. To fine-tune, pick one of these tools:
- Combine HF skills with a coding agent like Claude Code.
- Use this Colab notebook.
- Train it on HF Jobs using the Hugging Face Hub.
- If you can, run this script on your own setup with uv.
This should get anyone started with fine-tuning, and this is the perfect model to start with.
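The "run a script on your own setup" path can be sketched with TRL's `SFTTrainer`, assuming a recent TRL release that accepts a Hub model id directly. The model id and dataset below are placeholders, not the exact names from the post:

```python
# Hedged sketch of a TRL supervised fine-tuning run.
# "nvidia/Nemotron-3-Nano" and "trl-lib/Capybara" are illustrative
# placeholders; substitute the real model and your own dataset.
def finetune_nemotron_nano(model_id="nvidia/Nemotron-3-Nano",
                           dataset_id="trl-lib/Capybara"):
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    dataset = load_dataset(dataset_id, split="train")
    trainer = SFTTrainer(
        model=model_id,                          # loaded from the Hub by TRL
        args=SFTConfig(output_dir="nemotron-nano-sft"),
        train_dataset=dataset,
    )
    trainer.train()                              # or launch via uv / HF Jobs
    trainer.save_model()
```

The same function body works locally, in a Colab cell, or as the entry point of a script launched with `uv run`.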
New model from @Meituan_LongCat: LongCat-Video-Avatar! Audio-driven character animation with text, image, and video inputs, all in one!
✨ MIT license
✨ Audio > talking video (single & multi-person)
✨ Natural motion and lip sync
✨ Fewer repeats, stable identity
✨ Available on @huggingface
Introducing the Ndea podcast - Abstract Synthesis. Hear the stories behind interesting academic papers in the world of program synthesis. Episode 1 features @MarkSantolucito, @BarnardCollege/@Columbia, discussing his paper "Grammar Filtering for Syntax-Guided Synthesis". https://t.co/uJ1NVxU6rK
@youwouldntpost @Srirachachau Downloading the "driving during daytime" patch https://t.co/R6EmIolLDo
The Tesla Cybertruck just earned the Top Safety Pick+ award, scoring a "Good" rating, the top mark, in every major crash category in the 2025 IIHS crash tests https://t.co/o5IRpmMqzg
Woman who joked about putting toilet cleaner and feces in food of "white MAGA family" identified as daughter of Virginia delegate https://t.co/LwDaOssHEO
The Woke Mind Virus in Academia https://t.co/ztXf1lLxL6
GPT-5.2 is our strongest model on the FrontierScience eval, showing clear gains on hard scientific tasks. But the benchmark also reveals a gap between strong performance on structured problems and the open-ended, iterative reasoning that real research requires. https://t.co/lZsZSXkOrj

Introducing ChatGPT Images, powered by our flagship new image generation model.
- Stronger instruction following
- Precise editing
- Detail preservation
- 4x faster than before
Rolling out today in ChatGPT for all users, and in the API as GPT Image 1.5. https://t.co/NLNIPEYJnr
This map should be included in every history book... https://t.co/VyuLo90IEE
Quantum Dreaming 2025: When Dreams Become Parallel Reality Portals
Last year we asked: Are your dreams just imagination… or glimpses into alternate timelines?
This year, the answer is clearer than ever. 2025 brought breakthroughs that turned quantum dreaming from theory to lived experience:
• Neuralink's first 1000+ volunteers reported vivid "timeline bleed" dreams
• DMT + VR studies showed 87% of participants experienced consistent parallel-world memories
• Lucid dreamers using tDCS + galantamine now report 40-minute "visits" to stable alternate realities
Every déjà vu? A memory leak from a timeline where you chose differently.
Every precognitive dream? Your mind tuning into a branch that's already happening.
2025 is the year we stopped calling them "just dreams." We started calling them evidence.
Keep dreaming, explorer. One of them might be more real than this one.
#QuantumDreaming #ParallelRealities #2025Awakening #LucidDreaming #Multiverse
Grok Imagine prompt: Pastel color quantum dreaming
LlamaSplit automatically separates bundled documents into distinct sections so you don't have to manually split them anymore. Our new beta API uses AI to analyze page content and group consecutive pages by category, perfect for processing mixed document bundles that contain multiple distinct documents:
• Define categories with natural language descriptions and get back exact page ranges with confidence scores
• Route different document types to appropriate agents
• Scale beyond manual document separation
• Combine with LlamaExtract to run targeted data extraction on each separated segment
Unlike our existing Classify product that categorizes separate files, LlamaSplit looks inside a single document to find boundaries between different document types. Try LlamaSplit in beta: https://t.co/cQqeZCGeww
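The routing step described above, taking page ranges with confidence scores and sending each segment to an appropriate agent, can be sketched as follows. The response shape, field names, and agent names here are hypothetical illustrations, not the actual LlamaSplit beta API:

```python
# Hypothetical shape of a LlamaSplit-style split result: each segment has a
# category, an inclusive page range, and a confidence score.
segments = [
    {"category": "invoice",  "pages": (1, 3),   "confidence": 0.94},
    {"category": "contract", "pages": (4, 10),  "confidence": 0.88},
    {"category": "invoice",  "pages": (11, 12), "confidence": 0.61},
]

# Illustrative per-category agents; any low-confidence segment is flagged
# for manual review instead of being routed automatically.
AGENTS = {"invoice": "billing-agent", "contract": "legal-agent"}

def route(segments, min_confidence=0.8):
    """Route confident segments to per-category agents; flag the rest."""
    routed, review = [], []
    for seg in segments:
        if seg["confidence"] >= min_confidence:
            routed.append((AGENTS.get(seg["category"], "default-agent"),
                           seg["pages"]))
        else:
            review.append(seg["pages"])
    return routed, review

routed, review = route(segments)
print(routed)   # confident segments paired with their target agents
print(review)   # low-confidence page ranges held for manual review
```

Each routed page range could then be passed to a targeted extraction step (e.g. LlamaExtract, as the post suggests) scoped to just those pages.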
The Little Mermaid gets her voice back. The Voice Control feature is now live in Kling VIDEO 2.6, and voice consistency is now resolved. Say goodbye to generic voices: create a custom voice, switch styles, and even sing, all perfectly matched to your characters.
Introducing YouBase by YouWare. The complete production backend for vibe coding, for just $20/month.
- Auth, Database, Storage, Edge Functions
- Deploy to your own domain
- Zero configuration
No cloud credits. No usage fees. No surprises. One prompt → full backend, live on your domain. Start building at link in bio!
Multimodal LLMs (MLLMs) excel at reasoning, layout understanding, and planning, yet in diffusion-based generation they are often reduced to simple multimodal encoders. What if MLLMs could reason directly in latent space and guide diffusion generation with fine-grained, spatiotemporal control? Introducing MetaCanvas: a lightweight framework that translates MLLM reasoning into structured spatiotemporal conditions for diffusion models. 🧵
Have an app idea but don't know where to start? Stop staring at a blank screen. We put together a guide on vibe coding with GitHub Copilot. It's more than just "vibes." It's about clearly articulating what you want so Copilot can handle the how of implementation. Check out the tutorial. ⬇️ https://t.co/2fxTF7OnaK

https://t.co/PwG1F4TT6Q
Elon Musk is now worth more than the combined net worth of Jeff Bezos, Mark Zuckerberg, and Warren Buffett.
Elon Musk - $638 Billion
Jeff Bezos - $246B
Mark Zuckerberg - $229B
Warren Buffett - $152B
------------
Combined: $627B https://t.co/1S4qOgP9wt

We are funded! At the time of the trailer in 2023, we were just 2 bedroom programmers with no experience in the industry. The game was just 6 months old and exceeded all our expectations. With limited finances, we started from scratch, working sleepless nights and handling everything from business to creation. Two years later, we are 10 developers and we are still hiring. Today, we're proud to announce that we finally have the budget to make UNRECORD and build the best game possible. Many thanks to our investors, who enjoyed playing the game and placed their trust in us. We're now fully focused on production and avoiding revealing unfinished work in order to raise the bar even higher. Now that we have funding, we'll finally share updates in 2026 that reflect our true final vision. A project is, above all, a human adventure that can't be built overnight, and we feel the pressure and the responsibility to create one of the most immersive games ever, one that makes no compromises. Thank you for your support!
In our most recent evaluations at @halevals, we found Claude Opus 4.5 solves CORE-Bench. How? It creatively resolves dependency conflicts, bypasses environmental barriers via nuanced benchmark editing, and follows instructions with high fidelity. Opus 4.1 and Sonnet 4, when given the same powerful scaffold, fail because they resort to simulated data when running into conflicts and provide answers using heuristics rather than precise data. We also observe Opus 4.5 more accurately representing its actions in its summary workflow, displaying stronger agentic alignment. 🧵
ใใญใฐใฌในๅๅ ใใขใใใใใใใใๆๅกๆใใๅๅๅถไฝ๏ผBMG #wf2018w #ใใคใใฃใ https://t.co/z6TMLdNbor

Introducing Wan2.6 - A native multimodal model that turns your ideas into breathtaking videos and images!
· Starring: Cast characters from reference videos into new scenes. Support human or human-like figures, enabling complex multi-person and human-object interactions with appearance and voice consistency.
· Intelligent Multi-shot Narrative: Turn simple prompts into auto-storyboarded, multi-shot videos. Maintain visual consistency and upgrade storytelling from single shots to rich narratives.
· Native A/V Sync: Generate multi-speaker dialogue with natural lip-sync and studio-quality audio. It doesn't just look real - it sounds real.
· Cinematic Quality: 15s 1080p HD generation with comprehensive upgrades to instruction adherence, motion physics, and aesthetic control.
· Advanced Image Synthesis and Editing: Deliver cinematic photorealism with precise control over lens and lighting. Support multi-image referencing for commercial-grade consistency and faithful aesthetic transfer.
· Storytelling with Structure: Generate interleaved texts and images powered by real-world knowledge and reasoning capabilities, enabling hierarchical and structured visual narratives.
Global Premiere: Seedance 1.5 Pro Now Available on Dreamina AI
We're thrilled to unveil our latest model Video 3.5 Pro, powered by Seedance 1.5 Pro, for its worldwide debut! Create videos with:
· Native audio generation
· Natural character expressions
· Perfectly synced ambient sound
New users who sign up on or after Dec 16 get 3 free generations. Be among the first to experience it! Enterprise users can start testing in the ModelArk Experience Center from December 18.
*Rolling out in selected regions only. #dreaminaai #Dreaminadream #dreamina #seedance
✨Wan 2.6 is here. Prompt to multi-shot video, now up to 15s. Multimodal inputs, character consistency, and yes, video references. Limited offer ends 12/20:
Pro Annual → 1 month of Wan 2.6 Unlimited
Ultimate Annual → 365 days of Wan 2.6 Unlimited
Storytelling upgraded.
Datasets Wrapped 2025! Picked out some important @huggingface datasets from this year. Part 1 today: Reasoning. 2025 was the year reasoning exploded; some of the datasets that contributed to this... https://t.co/9eSMzkKpQL
@gr00vyfairy https://t.co/QWFyQ4dIuT