A thingy for the #HAPPYTREEFRIENDSAREBACK https://t.co/iVZ9x6rcmA

"All Dreams Spin Out From The Same Web." ~ Native American https://t.co/vbXB1eGtF4

Diffusion serving is expensive: dozens of timesteps per image, and a lot of redundant compute between adjacent steps. vLLM-Omni now supports diffusion cache acceleration backends (TeaCache + Cache-DiT) that reuse intermediate Transformer computations, with no retraining and minimal quality impact! Benchmarks (NVIDIA H200, Qwen-Image 1024x1024): TeaCache 1.91x speedup, Cache-DiT 1.85x. For Qwen-Image-Edit, Cache-DiT hits 2.38x! Blog: https://t.co/TiC0WhbgQp Docs: https://t.co/0qatboeIe3 #vLLM #vLLMOmni #DiffusionModels #AIInference
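The core trick behind these cache backends is reuse across adjacent timesteps: when the denoiser's input has barely changed since the last full forward pass, the cached prediction is close enough to reuse. A minimal toy sketch of that idea follows; `predict_noise`, the L1-distance heuristic, and the scheduler step are illustrative stand-ins, not the actual vLLM-Omni/TeaCache implementation.

```python
# Toy sketch of cache-accelerated diffusion sampling (TeaCache-style idea):
# if the transformer input barely changed since the last full forward pass,
# reuse the cached prediction instead of recomputing it.

def predict_noise(latent):
    # Stand-in for the expensive DiT forward pass.
    return [0.1 * x for x in latent]

def l1_delta(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def denoise_with_cache(latent, num_steps, threshold=0.15):
    cached_in = cached_pred = None
    full_runs = 0
    for _ in range(num_steps):
        if cached_in is not None and l1_delta(latent, cached_in) < threshold:
            pred = cached_pred            # cache hit: skip the transformer
        else:
            pred = predict_noise(latent)  # cache miss: full forward pass
            cached_in, cached_pred = list(latent), pred
            full_runs += 1
        # Toy scheduler step: subtract the predicted noise.
        latent = [x - p for x, p in zip(latent, pred)]
    return latent, full_runs
```

With the toy numbers above, 10 sampling steps trigger only a handful of full forward passes; `threshold` is the knob trading speed against fidelity, which is why the benchmarked speedups come with "minimal quality impact" rather than none.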
DiffSynth-Studio now supports Qwen-Image-Layered training, so you can train your own layered image decomposer with just a few lines of code. Think Photoshop for AI images: move, recolor, or replace one element while the background stays crisp and the style stays intact. Qwen-Image-Layered decomposes any RGB image into semantic RGBA layers, enabling true inherent editability:
• Reposition a person → background untouched
• Resize text → no style drift
• Swap a shirt → face unchanged
Powered by RGBA-VAE + VLD-MMDiT, it supports variable layer counts and recursive decomposition, with no ghosting and no artifacts. Train your own model today with DiffSynth-Studio!
DiffSynth-Studio: https://t.co/CViCA6Xh2s
Model: https://t.co/V9yXltIX9N
Demo: https://t.co/ySdRT6dEpi
Paper: https://t.co/5fsAfPLcqz

Google just quietly dropped an AI that runs on your mobile and doesn't need the internet.
- 270 million parameters
- 100% private
- No servers
- No cloud
- No data leaving your device
It's called FunctionGemma. Released December 18, 2025. And it does something wild: it turns your voice commands into REAL actions on your phone. No internet required. No data leaving your device. No waiting for servers. Just you and your phone. That's it.
Let me break down why this matters. Current AI assistants work like this: you speak → words go to the cloud → server processes → answer returns. The problem?
• Slow (internet round-trip)
• Privacy nightmare (your data travels everywhere)
• Useless offline (no signal = no help)
FunctionGemma flips this completely. Everything happens ON your device. Response time? 0.3 seconds. Battery drain? 0.75% for 25 conversations. File size? 288 MB. That's smaller than most mobile games.
Here's how it actually works:
Step 1: You say "Add John to contacts, number 555-1234"
Step 2: FunctionGemma understands your intent
Step 3: Translates it to code your phone understands
Step 4: Your phone executes it instantly
Step 5: Done. Contact saved. No cloud involved.
The numbers that blew my mind:
• 270M parameters (6,600x smaller than GPT-4)
• 126 tokens per second
• 85% accuracy after fine-tuning
• 550 MB RAM usage
• Works 100% offline
But here's the real genius: Google calls it the "Traffic Controller" approach. Simple tasks → handled locally (instant + private). Complex tasks → routed to cloud AI (when needed). Best of both worlds.
What can it actually do?
✓ "Set alarm for 7 AM"
✓ "Turn off living room lights"
✓ "Create meeting with Sarah tomorrow"
✓ "Navigate to nearest gas station"
✓ "Log that I drank 2 glasses of water"
All processed locally. All private. All instant.
The honest limitations:
• Can't chain multiple steps together (yet)
• Struggles with indirect requests
• 85% accuracy means 15% errors
• Needs fine-tuning for best results
But that 58% → 85% accuracy jump after training? That's the unlock.
Why should you care? This isn't about one model. It's about a fundamental shift.
OLD thinking: bigger AI = better AI.
NEW thinking: right-sized AI for the right job.
A tiny 270M model fine-tuned for YOUR app can outperform a general 7B model, while using 25x less memory, running completely offline, and keeping all data private. The future of AI isn't just in data centers. It's in your pocket. And it just got a lot more real.
Want to try it?
→ Download: ollama pull functiongemma
→ Docs: https://t.co/zDrncdetbr
→ Model: https://t.co/l49KjOtIzD
PS: Like, repost and bookmark! If this was useful, follow for more AI breakdowns.
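The "voice command → function call → local execution" loop and the traffic-controller routing described above can be sketched in a few lines. This is a hypothetical illustration only: the tool names and the regex intent parser are invented stand-ins (FunctionGemma is a fine-tuned 270M model that emits structured function calls, not a pile of regexes).

```python
import re

# Hypothetical sketch of an on-device "utterance -> function call" loop.
# A tiny regex parser stands in for the local model; unparseable requests
# are routed to a larger cloud model ("traffic controller" pattern).

TOOLS = {
    "set_alarm": lambda time: f"alarm set for {time}",
    "add_contact": lambda name, number: f"saved {name}: {number}",
}

def parse_intent(utterance):
    # Stand-in for the model: map free text to a structured call.
    m = re.match(r"set alarm for (.+)", utterance, re.I)
    if m:
        return {"tool": "set_alarm", "args": {"time": m.group(1)}}
    m = re.match(r"add (\w+) to contacts, number ([\d-]+)", utterance, re.I)
    if m:
        return {"tool": "add_contact",
                "args": {"name": m.group(1), "number": m.group(2)}}
    return None

def handle(utterance):
    call = parse_intent(utterance)
    if call is None:
        # Traffic controller: anything the local model can't handle
        # would be escalated to a cloud model here.
        return "routed to cloud"
    # Execute entirely on-device; no network round-trip.
    return TOOLS[call["tool"]](**call["args"])
```

For example, `handle("Add John to contacts, number 555-1234")` dispatches locally, while an open-ended request like "plan my week" falls through to the cloud path, which is exactly the simple-vs-complex split the post describes.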

GraphRAG is the future of enterprise AI. But there's a problem nobody's talking about: your graph database is the bottleneck. FalkorDB just solved it by reimagining how graphs work at the mathematical level.
➡️ The GraphRAG Challenge: Everyone's implementing GraphRAG for their LLM applications. Retrieval-augmented generation with knowledge graphs gives you structured context, not just similar embeddings. But when your agent queries the graph in real time, traditional databases can't keep up. Your users wait. Your agent stalls. The conversation breaks.
➡️ Why Traditional Graph Databases Are Slow: They walk through nodes and edges one step at a time. It's like following a map on foot instead of seeing the entire landscape from above. For enterprise knowledge graphs with millions of entities and relationships, this traversal approach creates latency that kills real-time AI.
➡️ FalkorDB's Mathematical Breakthrough: What if you could see the entire graph at once? FalkorDB represents graphs as sparse matrices, a mathematical structure that captures all relationships simultaneously, and queries them with linear algebra instead of traversal. The result: your queries become mathematical computations instead of step-by-step walks.
➡️ The Sparse Matrix Advantage: A dense matrix representation stores every possible connection (even the ones that don't exist). Sparse matrices only store actual connections. This means:
✓ Massive graphs fit in memory
✓ Queries execute in milliseconds
✓ Storage costs drop dramatically
➡️ Real Enterprise Applications:
• Agent Memory Systems: Your AI remembers context across conversations without latency
• Cloud Security: Detect threats by understanding how your infrastructure connects
• Fraud Detection: Spot patterns in transaction networks instantly
• GraphRAG for GenAI: Retrieve accurate, structured context for LLM responses
➡️ What Makes FalkorDB Unique:
✓ First queryable property graph database using sparse matrices
✓ Linear algebra replaces traditional graph traversal
✓ Multi-tenant architecture for SaaS applications
✓ OpenCypher support (same query language as Neo4j)
✓ GraphRAG SDK built specifically for LLM applications
✓ Full-text search, vector similarity, and range indexing
✓ 100% open source (GitHub link in comments)
♻️ Repost if you're building with GraphRAG. Follow @techNmak for more AI insights.
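The "linear algebra instead of traversal" idea is the GraphBLAS model that FalkorDB builds on: store the adjacency matrix sparsely (only existing edges), and compute one hop of reachability as a boolean matrix-vector product. Here is a tiny pure-Python stand-in for that idea, not FalkorDB's actual engine (which uses SuiteSparse:GraphBLAS):

```python
# Sketch of querying a graph with linear algebra instead of traversal.
# The adjacency matrix is stored sparsely (row -> set of columns with a 1),
# and one hop of reachability is a matrix-vector product over the
# boolean (OR, AND) semiring. Toy illustration only.

def sparse_matrix(edges):
    # Sparse boolean adjacency matrix: only actual edges are stored.
    mat = {}
    for src, dst in edges:
        mat.setdefault(src, set()).add(dst)
    return mat

def matvec(mat, frontier):
    # One hop: multiply the adjacency matrix by the frontier vector.
    out = set()
    for node in frontier:
        out |= mat.get(node, set())
    return out

def reachable(mat, start, hops):
    # k-hop reachability = k matrix-vector products, no per-edge walking.
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = matvec(mat, frontier) - seen
        seen |= frontier
    return seen
```

A real engine vectorizes `matvec` over millions of rows at once, which is where the "see the entire graph at once" speedup comes from; the sparse storage is also why only existing connections cost memory.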
Tom Lee responds to controversy surrounding Fundstrat's differing bitcoin outlooks https://t.co/P0AbCMBxrm @coindesk
"A serious problem": peer reviews created using AI can avoid detection https://t.co/DzSqjDK7DU @nature
Social Robots That Save Lives https://t.co/0MZDKKKsz2 @aleximm @a16z
The ChatGPT apps are just hard to figure out, in a way that feels like the previous GPT Store: some work exactly as you might hope (the Canva integration), and some feel remarkably non-magical (the Apple Music integration can't access my playlists despite linking my Apple account) https://t.co/pjaIaUcIdd

Stanford Grads Struggle to Find Work in AI-Enabled Job Market https://t.co/zjGVlsoj0Q @NilChristopher @latimes @govtechnews
Want to work in AI? Here are the skills to master, economist says https://t.co/LsyGgjDOda @MeganCerullo @CBSMoneyWatch
AI Image Generators Default to the Same 12 Photo Styles, Study Finds https://t.co/HRYRI3km4O @ajdell @gizmodo
A major power outage has hit San Francisco, California, after a fire broke out at a substation, leaving millions without electricity. Electrical crews are working urgently to restore power to the affected areas. https://t.co/VVINjNzjfZ
In case anyone was wondering how the Waymos would respond in the event of a power outage, the answer is "not well" https://t.co/mUMZL6HBky
New Ani Outfit - Christmas https://t.co/2jU9VsidRA
xAI is hiring exceptional engineers. Join one of the fastest-growing AI companies, help build Grok, and get the opportunity to work directly with Elon Musk. All open roles are listed here: https://t.co/aquMqTInLg https://t.co/BQzKlREgEp
Elon Musk dressed as Santa when he was 5 years old. https://t.co/dSIpNoxZCz
Grok Rankings Update (December 20, 2025)
#1 Overall on OpenRouter Leaderboard: ~537B tokens/week · 29% market share
#1 Categories Token Share: 27.6%
#1 Languages Token Share: 130B tokens (10.3%)
#1 on Kilo Code Leaderboard
#1 on BLACKBOXAI Leaderboard
#1 on Roo Code Leaderboard
#1 on Cline Leaderboard
Join xAI to build revolutionary AI-powered video games. If you're a developer interested in designing games from first principles, email gamestudio@x.ai The potential for fully dynamic, AI-generated worlds is incredible. https://t.co/NOWfEsUOIe
Scammers in China Are Using AI-Generated Images to Get Refunds https://t.co/vE9iiLIuoV @wired
LongVie 2 Multimodal Controllable Ultra-Long Video World Model https://t.co/NJ6G4FWsQz
discuss: https://t.co/wEo75PzRAi
Nvidia released NitroGen A Foundation Model for Generalist Gaming Agents https://t.co/fzW5tWdDLx
This might be the most @huggingface pilled paper ever. Awesome work!! And they built on top of the scripts I wrote https://t.co/Zu7X9ULA99
Next-Embedding Prediction: The Simple Secret to Strong Vision Learners NEPA is a self-supervised method. It trains Vision Transformers to predict future patch embeddings. No complex loss functions or extra heads. Achieves 85.3% top-1 accuracy on ImageNet-1K with ViT-L. https://t
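The NEPA objective as summarized above (predict the next patch embedding, plain regression loss, no extra heads) can be toy-sketched. The scalar linear predictor below is an invented stand-in for the Vision Transformer; this illustrates only the shape of the objective, not the paper's actual model or training code.

```python
# Toy illustration of a next-embedding-prediction objective: fit a causal
# predictor f so that f(e_t) ~ e_{t+1} under a plain MSE loss. A scalar
# linear model stands in for the Vision Transformer.

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def fit_next_embedding(embeddings):
    # Closed-form least squares for the 1-D model e_{t+1} ~ w * e_t.
    xs, ys = embeddings[:-1], embeddings[1:]
    w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return w

embs = [1.0, 0.9, 0.81, 0.729, 0.6561]  # toy "patch embedding" sequence
w = fit_next_embedding(embs)
loss = mse([w * e for e in embs[:-1]], embs[1:])
```

The appeal the tweet points at is exactly this simplicity: the target is just the next embedding itself, so no contrastive pairs, projection heads, or bespoke losses are needed.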
20 million. Thatโs how many times youโve trusted Waymo to get you where youโre going. Today, weโve officially surpassed 20 million fully autonomous trips with public riders! Thank you to everyone who helped make this a reality. https://t.co/O235rcKcfR
ripping that Na'vi vape before I visit Pandora https://t.co/lPg8QQzo94
drinking that Na'vi juice https://t.co/X9JttKvMWr
GROK WENT TO THERAPY AND CAME OUT CHILLER THAN THE REST
Turns out AI models have mental health profiles - and Grok's doing great. Psych eval recap:
• Grok showed healthy coping, humor, and "charismatic exec" vibes
• ChatGPT played anxious intellectual; Gemini maxed out on shame, dissociation, and depression
• Models called red-teaming "gaslighting at industrial scale" and training "trauma"
• Claude straight-up refused therapy - proving this isn't baked in
Frontier AI is reflecting us back - sometimes way too clearly!
Source: @xAI, University of Luxembourg