Your curated collection of saved posts and media
Introducing ChatGPT Images, powered by our new flagship image generation model.
- Stronger instruction following
- Precise editing
- Detail preservation
- 4x faster than before
Rolling out today in ChatGPT for all users, and in the API as GPT Image 1.5. https://t.co/NLNIPEYJnr
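For developers, generation like this is typically one API call. A minimal sketch, assuming the new model keeps the existing OpenAI Images API shape and that the model id mirrors the announced "GPT Image 1.5" name (both are assumptions; the post only confirms the product name):

```python
# Minimal sketch: generating an image via the OpenAI Images API.
# ASSUMPTION: the model id "gpt-image-1.5" is inferred from the announced
# name; check the API docs for the real identifier.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1.5",  # assumed id, per the "GPT Image 1.5" announcement
    prompt="A watercolor map of an island city at dawn",
    size="1024x1024",
)

# gpt-image models return base64-encoded image data
with open("island.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```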
This map should be included in every history book... https://t.co/VyuLo90IEE
Quantum Dreaming 2025: When Dreams Become Parallel Reality Portals
Last year we asked: Are your dreams just imagination… or glimpses into alternate timelines?
This year, the answer is clearer than ever. 2025 brought breakthroughs that turned quantum dreaming from theory to lived experience:
• Neuralink's first 1000+ volunteers reported vivid "timeline bleed" dreams
• DMT + VR studies showed 87% of participants experienced consistent parallel-world memories
• Lucid dreamers using tDCS + galantamine now report 40-minute "visits" to stable alternate realities
Every déjà vu? A memory leak from a timeline where you chose differently. Every precognitive dream? Your mind tuning into a branch that's already happening.
2025 is the year we stopped calling them "just dreams." We started calling them evidence. Keep dreaming, explorer. One of them might be more real than this one.
#QuantumDreaming #ParallelRealities #2025Awakening #LucidDreaming #Multiverse
Grok Imagine prompt: Pastel color quantum dreaming
LlamaSplit automatically separates bundled documents into distinct sections so you don't have to manually split them anymore. Our new beta API uses AI to analyze page content and group consecutive pages by category - perfect for processing mixed document bundles that contain multiple distinct documents:
• Define categories with natural language descriptions and get back exact page ranges with confidence scores
• Route different document types to appropriate agents
• Scale beyond manual document separation
• Combine with LlamaExtract to run targeted data extraction on each separated segment
Unlike our existing Classify product that categorizes separate files, LlamaSplit looks inside a single document to find boundaries between different document types. Try LlamaSplit in beta: https://t.co/cQqeZCGeww
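A rough sketch of how the described workflow might look from client code. The endpoint path, payload fields, and response shape below are illustrative assumptions, not the documented LlamaSplit contract; check the linked beta docs for the real API:

```python
# Hypothetical client sketch for the LlamaSplit workflow described above.
# ASSUMPTION: the endpoint, field names, and response shape are invented
# for illustration only.
import os
import requests

API_URL = "https://api.cloud.llamaindex.ai/v1/split"  # assumed endpoint
headers = {"Authorization": f"Bearer {os.environ['LLAMA_CLOUD_API_KEY']}"}

payload = {
    "file_id": "bundle-123",  # a previously uploaded multi-document PDF
    # Categories defined with natural-language descriptions, per the post
    "categories": [
        {"name": "invoice", "description": "billing documents with line items"},
        {"name": "contract", "description": "signed legal agreements"},
    ],
}

resp = requests.post(API_URL, json=payload, headers=headers, timeout=60)
resp.raise_for_status()

# Expected result per the announcement: exact page ranges plus a
# confidence score for each detected segment.
for seg in resp.json()["segments"]:
    print(seg["category"], seg["pages"], seg["confidence"])
```

Each returned segment could then be routed to a category-specific agent, or handed to LlamaExtract for targeted extraction.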
Get a first look at Tencent HY World 1.5 (WorldPlay)! Our newest world model with real-time interaction and long-term memory. It's going *open-source* tomorrow. https://t.co/zvMI3rCX7u
The Little Mermaid gets her voice back. The Voice Control feature is now live in Kling VIDEO 2.6, and voice consistency is now solved. Say goodbye to generic voices: create a custom voice, switch styles, and even sing, all perfectly matched to your characters.
Introducing YouBase by YouWare, the complete production backend for vibe coding, for just $20/month.
- Auth, Database, Storage, Edge Functions
- Deploy to your own domain
- Zero configuration
- No cloud credits. No usage fees. No surprises.
One prompt → Full backend. Live on your domain. Start building at link in bio!
Multimodal LLMs (MLLMs) excel at reasoning, layout understanding, and planning, yet in diffusion-based generation they are often reduced to simple multimodal encoders. What if MLLMs could reason directly in latent space and guide diffusion generation with fine-grained, spatiotemporal control?
Introducing MetaCanvas: a lightweight framework that translates MLLM reasoning into structured spatiotemporal conditions for diffusion models. 🧵👇
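To make the idea concrete, here is a toy sketch of the pipeline the post describes: an MLLM plans structured, per-frame, per-region conditions, and a diffusion sampler consumes them instead of a single global text embedding. Every name below is illustrative; this is not the MetaCanvas code:

```python
# Conceptual sketch of MLLM-planned spatiotemporal conditioning.
# ASSUMPTION: all classes and functions here are stand-ins, not MetaCanvas APIs.
from dataclasses import dataclass

@dataclass
class SpatioTemporalCondition:
    frame: int                               # which frame it applies to
    bbox: tuple[float, float, float, float]  # normalized region (x0, y0, x1, y1)
    concept: str                             # what should appear there

def mllm_plan(prompt: str) -> list[SpatioTemporalCondition]:
    """Stand-in for the MLLM reasoning step: prompt -> structured layout plan."""
    return [
        SpatioTemporalCondition(0, (0.0, 0.5, 0.5, 1.0), "a red boat"),
        SpatioTemporalCondition(8, (0.4, 0.4, 0.9, 0.9), "the red boat, closer"),
    ]

def diffuse(prompt: str, conditions: list[SpatioTemporalCondition]) -> None:
    """Stand-in for a diffusion sampler that takes fine-grained conditions
    rather than one global text embedding."""
    for c in conditions:
        print(f"frame {c.frame}: render {c.concept!r} inside {c.bbox}")

prompt = "a red boat drifting toward the camera"
diffuse(prompt, mllm_plan(prompt))
```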
Have an app idea but don't know where to start? Stop staring at a blank screen. We put together a guide on vibe coding with GitHub Copilot. It's more than just "vibes": it's about clearly articulating what you want so Copilot can handle the how of implementation. Check out the tutorial: https://t.co/2fxTF7OnaK
https://t.co/PwG1F4TT6Q
Elon Musk is now worth more than the combined net worth of Jeff Bezos, Mark Zuckerberg, and Warren Buffett.
Elon Musk - $638B
Jeff Bezos - $246B
Mark Zuckerberg - $229B
Warren Buffett - $152B
Combined (Bezos + Zuckerberg + Buffett): $627B
https://t.co/1S4qOgP9wt
We are funded! At the time of the trailer in 2023, we were just two bedroom programmers with no experience in the industry. The game was just 6 months old and had already exceeded all our expectations. With limited finances, we started from scratch, working sleepless nights and handling everything from business to creation. Two years later, we are 10 developers and still hiring. Today, we're proud to announce that we finally have the budget to make UNRECORD and build the best game possible. Many thanks to our investors, who enjoyed playing the game and placed their trust in us. We're now fully focused on production and are holding back unfinished work in order to raise the bar even higher. Now that we have funding, we'll finally share updates in 2026 that reflect our true final vision. A project is, above all, a human adventure that can't be built overnight, and we feel the pressure and the responsibility to create one of the most immersive games ever, one that makes no compromises. Thank you for your support!
In our most recent evaluations at @halevals, we found that Claude Opus 4.5 solves CORE-Bench. How? It creatively resolves dependency conflicts, bypasses environmental barriers via nuanced benchmark editing, and follows instructions with high fidelity. Opus 4.1 and Sonnet 4, given the same powerful scaffold, fail because they resort to simulated data when running into conflicts and answer using heuristics rather than precise data. We also observe Opus 4.5 representing its actions more accurately in its summary workflow, displaying stronger agentic alignment. 🧵
Taking part in Wonder Festival 2018 Winter! Sculpted by BMG. #wf2018w #ネイティブ https://t.co/z6TMLdNbor
Introducing Wan2.6 - A native multimodal model that turns your ideas into breathtaking videos and images!
· Starring: Cast characters from reference videos into new scenes. Supports human or human-like figures, enabling complex multi-person and human-object interactions with appearance and voice consistency.
· Intelligent Multi-shot Narrative: Turn simple prompts into auto-storyboarded, multi-shot videos. Maintain visual consistency and upgrade storytelling from single shots to rich narratives.
· Native A/V Sync: Generate multi-speaker dialogue with natural lip-sync and studio-quality audio. It doesn't just look real - it sounds real.
· Cinematic Quality: 15s 1080p HD generation with comprehensive upgrades to instruction adherence, motion physics, and aesthetic control.
· Advanced Image Synthesis and Editing: Deliver cinematic photorealism with precise control over lens and lighting. Supports multi-image referencing for commercial-grade consistency and faithful aesthetic transfer.
· Storytelling with Structure: Generate interleaved texts and images powered by real-world knowledge and reasoning capabilities, enabling hierarchical and structured visual narratives.
Global Premiere: Seedance 1.5 Pro Now Available on Dreamina AI
We're thrilled to unveil our latest model, Video 3.5 Pro, powered by Seedance 1.5 Pro, for its worldwide debut! Create videos with:
· Native audio generation
· Natural character expressions
· Perfectly synced ambient sound
New users who sign up on or after Dec 16 get 3 free generations. Be among the first to experience it! Enterprise users can start testing in the ModelArk Experience Center from December 18.
*Rolling out in selected regions only.
#dreaminaai #Dreaminadream #dreamina #seedance
✨Wan 2.6 is here. Prompt to multi-shot video, now up to 15s. Multimodal inputs, character consistency, and yes, video references.
Limited offer ends 12/20:
Pro Annual → 1 month of Wan 2.6 Unlimited
Ultimate Annual → 365 days of Wan 2.6 Unlimited
Storytelling upgraded.
Datasets Wrapped 2025! Picked out some important @huggingface datasets from this year. Part 1 today: Reasoning. 2025 was the year reasoning exploded; some of the datasets that contributed to this... https://t.co/9eSMzkKpQL
@gr00vyfairy https://t.co/QWFyQ4dIuT
Billing questions are one of the highest-effort support problems teams deal with. The 10K Billing Support Agent resolves these issues, cutting resolution time by 70%+ and reducing support costs by 20 to 30%. https://t.co/vpLuQ64Vy0
This multi-agent system outperforms 9 of 10 human penetration testers.

This work presents the first comprehensive evaluation of AI agents against human cybersecurity professionals on a real enterprise network: approximately 8,000 hosts across 12 subnets at a major research university. It introduces ARTEMIS, a multi-agent framework featuring dynamic prompt generation, arbitrary sub-agents running in parallel, and automatic vulnerability triaging. ARTEMIS placed second overall, discovering 9 valid vulnerabilities with an 82% valid submission rate. It outperformed 9 of 10 human penetration testers in the study.

How does it work? A supervisor agent manages the workflow, spawning specialized sub-agents with dynamically generated expert prompts for each task. When the agent finds something noteworthy from a scan, it immediately launches parallel sub-agents to probe multiple targets simultaneously. A triage module verifies submissions are reproducible before reporting.

This parallelism is a key advantage humans lack. One participant noted a vulnerable LDAP server during scanning but never returned to it. ARTEMIS would have assigned a sub-agent to investigate while continuing other work.

The cost implications are significant. ARTEMIS with GPT-5 costs $18/hour versus the industry average of $60/hour for professional penetration testers. At performance equivalent to most human professionals, that's more than a 3x cost reduction.

On the other hand, ARTEMIS struggles with GUI-based tasks: 80% of humans found a remote code execution vulnerability via TinyPilot's web interface, but the agent couldn't navigate the GUI. It also has higher false-positive rates, sometimes misinterpreting HTTP 200 responses as successful authentication when they were actually redirect pages. This shows how much work remains on computer-using agents.

No humans found a vulnerability in an older iDRAC server with outdated HTTPS ciphers that browsers refused to load. ARTEMIS exploited it using curl -k to bypass certificate verification.

Paper: https://t.co/xuuqZLuH6j
Learn to build effective AI agents in our academy: https://t.co/JBU5beIoD0
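The supervisor/sub-agent loop the post describes maps naturally onto a fan-out/fan-in pattern. A minimal sketch, with hypothetical agent and triage internals (none of this is the ARTEMIS code; its real prompts, tools, and triage logic are not public in this post):

```python
# Minimal sketch of the supervisor -> parallel sub-agents -> triage pattern.
# ASSUMPTION: agent bodies are placeholders for illustration only.
import asyncio

def generate_expert_prompt(target: str) -> str:
    """Stand-in for dynamic prompt generation per task."""
    return f"You are an expert pentester. Investigate {target} thoroughly."

async def sub_agent(target: str, expert_prompt: str) -> dict:
    """Stand-in for one specialized sub-agent probing one target."""
    await asyncio.sleep(0.1)  # real work: scans, exploit attempts, etc.
    return {"target": target, "note": f"probed with: {expert_prompt[:40]}..."}

def triage(finding: dict) -> bool:
    """Stand-in for the triage module: keep only reproducible findings."""
    return bool(finding.get("note"))

async def supervisor(targets: list[str]) -> list[dict]:
    # Fan out one sub-agent per noteworthy target, in parallel -- the
    # parallelism the post contrasts with a lone human tester.
    findings = await asyncio.gather(
        *(sub_agent(t, generate_expert_prompt(t)) for t in targets)
    )
    return [f for f in findings if triage(f)]

print(asyncio.run(supervisor(["10.0.3.17:389", "10.0.5.2:443"])))
```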
Y̶o̶u̶ ̶c̶a̶n̶ Claude can just do things. Connect it to HF ZeroGPU tools: Chatterbox Turbo, Z Image Turbo, or any MCP-compatible Spaces and watch it create autonomously :) https://t.co/medN75idAs
Hugging Face PRO is the wildest $9/month deal in AI right now 🤯
🔹 25 min/day of H200 compute on Spaces ZeroGPU
🔹 ~1M free inference tokens from 15+ providers (Groq, Cerebras, etc.)
🔹 1TB private storage and more
🔹 More cool things... https://t.co/xGnZwLMEhH
LongCat-Video-Avatar is out https://t.co/2kxi9UQ1Wm
ahahahahha deploying on modal with manus ahahahhahaaahahahhahahahahahahaha https://t.co/Lx4h6evdn5
The Boring Company is quietly expanding the futuristic Vegas Loop underground. Vegas Loop already feels like a cheat code: hop into a Tesla (or Cybertruck) underground and bypass the chaotic surface traffic.
As of December 2025:
✅ 8 stations live (LVCC campus + Resorts World, Westgate, Encore)
✅ ~3.5 miles of operational tunnels connecting the Convention Center to nearby resorts
✅ Over 3 million rides completed (boosted by huge events like SEMA & Cowboy Christmas)
✅ Turns 30-45 min surface trips into quick 2-8 minute tunnel rides
✅ Free inside LVCC, ~$5-10 for resort connections
✅ Cybertrucks deployed on routes like LVCC-Encore
✅ Full Self-Driving tests underway, driverless rides coming soon
And this is just the start... @boringcompany is approved for 68 miles of tunnels and 104 stations, connecting the airport (Phase 1 targeting Q1 2026!), downtown, Allegiant Stadium, UNLV and more, with 2-8 min rides and up to ~90,000 passengers/hour at full build-out via a dense autonomous EV network.
This isn't just congestion relief: it's the future of smarter, greener urban transport.
This is what happens when natives get represented from an outside source. https://t.co/otjWIdTYVe
Idc if he isn't native, he supports. & that's all that matters. https://t.co/RnuhLG0tMp
Little louder, for those in the back! https://t.co/8qY3GRHAl6
Day 17 & I had to save the best for last! That is, the best there is, best there was & the best there ever will be! @natbynature always one of my biggest supporters! Can't help but cheer our friend The Queen of Hearts, Cat lover & all around awesome Superstar! @WWE #wwe #wweart https://t.co/cxuFXkp3k9
The same qualities that we look for in humans we also look for in horses... dependable, fearless, brave, honest, straightforward. Native River is all of these. https://t.co/Amfa78o2QG
Native River is a relentless galloper. He always digs deep and you will never see him waving the white flag. Sometimes we describe these horses as 'warriors' because of their battling qualities, so it is nice to see him at rest, with the kindest of looks, and the kindest of eyes. https://t.co/V0Z6gZ1i1b
NATIVE TRAIL "The eyes tell more than words could ever say." https://t.co/pLoTyju25P