Your curated collection of saved posts and media
Introducing TimeFusion, our new multimodal foundation model that unlocks a unified language between humans and sensors.

For decades, organizations have had access to billions of sensor signals across industry, infrastructure, energy systems, and personal devices, but almost none of that data has been easy to understand or act upon directly. TimeFusion changes that. You can *ask* a machine about vibration anomalies, *generate* new signals from a text description, or *forecast* what comes next, all in plain English.

Here is how it works:

🔹 TimeFusion is a general sensor-language fusion model: a 2-billion-parameter transformer trained to ingest and produce both natural language and raw time-series signals in a single continuous framework.

🔹 Unlike previous approaches that compress sensor data into narrow text-only formats, TimeFusion uses Universal Tokens to combine time-series signals and language inside one shared vocabulary. This lets the model understand physical data directly, rather than translating it through ad-hoc workarounds, and perform forecasting, anomaly detection, filtering, imputation, captioning, QA, and generation through one unified interface.

🔹 TimeFusion outperforms much larger models such as GPT-5, Claude Sonnet, and GLM 4.6 on sensor-related tasks, despite being orders of magnitude smaller.

And the model isn't just translating signals into text. It can already perform powerful text-to-signal transformations: forecasting the future of a waveform, reconstructing missing data, filtering noise, or reshaping a signal from a natural-language prompt, producing new signals rather than text as output.

This opens the door to an entirely new category of interfaces with the physical world, where engineers, operators, doctors, city systems, and even consumers can converse with the machines and environments around them instead of digging through raw numbers and graphs.

A new way to talk to the physical world is here. #PhysicalAI
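The post doesn't say how Universal Tokens are built, but the core idea of one shared vocabulary for text and signal samples can be sketched with a toy quantizer. Everything below (the bin count, the normalization range, the ID layout, and the tiny text vocabulary) is a hypothetical illustration, not TimeFusion's actual tokenizer:

```python
# Hypothetical sketch of a shared text + signal vocabulary, in the
# spirit of the "Universal Tokens" idea above. The real TimeFusion
# tokenizer is not public; everything here is an assumption.
import math

TEXT_VOCAB = {"<bos>": 0, "forecast": 1, "the": 2, "signal": 3}
N_BINS = 256                       # discrete levels for signal samples
SIGNAL_BASE = len(TEXT_VOCAB)      # signal token IDs start after text IDs

def signal_to_tokens(samples, lo=-1.0, hi=1.0):
    """Uniformly quantize samples in [lo, hi] into shared-vocab IDs."""
    ids = []
    for x in samples:
        frac = (min(max(x, lo), hi) - lo) / (hi - lo)   # clamp, scale to [0, 1]
        ids.append(SIGNAL_BASE + min(int(frac * N_BINS), N_BINS - 1))
    return ids

def tokens_to_signal(ids, lo=-1.0, hi=1.0):
    """Invert the quantization back to approximate sample values (bin centers)."""
    return [lo + (i - SIGNAL_BASE + 0.5) / N_BINS * (hi - lo) for i in ids]

# A prompt can now mix modalities in one token sequence:
wave = [math.sin(2 * math.pi * t / 16) for t in range(16)]
sequence = [TEXT_VOCAB["forecast"], TEXT_VOCAB["the"], TEXT_VOCAB["signal"]]
sequence += signal_to_tokens(wave)
```

Because the signal tokens live in the same ID space as the words, a single autoregressive transformer can train on mixed sequences, which is what makes a prompt like "forecast the signal" followed by raw samples (and a raw-sample answer) possible in one interface.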
Haha got roasted by ChatGPT https://t.co/GPxVThtBjb

Stop repeating your instructions.

Copilot now supports Agent Skills, the open standard by @AnthropicAI. Package your specialized expertise into skills once and use them everywhere.

Try it in:
✅ agent mode in @code Insiders
✅ Copilot coding agent
✅ Copilot CLI

Here's how to get started. https://t.co/AY0EXNanwC
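In the Agent Skills format, a skill is a folder containing a SKILL.md whose YAML frontmatter tells the agent when to load it. The skill below is a made-up example for illustration only; check the linked guide for the exact fields your Copilot surface supports:

```markdown
---
name: release-notes
description: Drafts release notes from merged pull requests, following the team's changelog conventions.
---

# Release notes

1. Group merged PRs by label (feature, fix, chore).
2. Write one past-tense bullet per PR, linking the PR number.
3. Skip chore-labeled PRs unless they change public behavior.
```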
2025 was a busy year. From model launches to Labs experiments to scientific breakthroughs, it can be tough to keep up with all of the new ways AI can help make your life easier. So, here are 3 of our favorite helpful AI tips (and you can find 37 more in the link below):

1. Make an interactive simulation with Gemini 3: If you're researching something complicated like mortgage loans, Gemini 3 in AI Mode can build you a custom interactive loan calculator, so you can compare two different options and see which offers the most long-term savings.

2. Find your style with a selfie: Upload a selfie into our virtual try-on tool. Nano Banana will generate a full-body digital version of you to virtually try on the latest styles.

3. Get homework help with Guided Learning: Guided Learning in the @GeminiApp is an interactive study partner that helps you build a deeper understanding of any topic. It can generate study guides from uploaded course material, walk you through debugging problematic code, or simply explain topics with helpful videos and images.

Check out the rest, and share your own tips with us! ⬇️ https://t.co/o94gbstesR
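Tip 1's loan comparison comes down to the standard fixed-rate amortization formula, M = P·r·(1+r)^n / ((1+r)^n - 1), where P is the principal, r the monthly rate, and n the number of payments. A quick sketch with hypothetical numbers (the $300,000 / 30-year figures are made up, not from the post):

```python
# Standard fixed-rate mortgage payment: M = P*r*(1+r)^n / ((1+r)^n - 1)
def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    if r == 0:
        return principal / months          # zero-interest edge case
    growth = (1 + r) ** months
    return principal * r * growth / (growth - 1)

# Hypothetical comparison: $300,000 over 30 years at 6.0% vs 5.5%.
a = monthly_payment(300_000, 0.060, 360)   # ~ $1,798.65 / month
b = monthly_payment(300_000, 0.055, 360)   # ~ $1,703.37 / month
lifetime_savings = (a - b) * 360           # total saved over the loan's life
```

This is exactly the comparison an interactive calculator performs for you; the half-point rate difference here compounds to tens of thousands of dollars over the term.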
I wrote a post for @Variety about the complexities of #Native identity. It's not an easy subject & would benefit from more listening and understanding. #sacheenlittlefeather #NativeTwitter https://t.co/5PsUDHKVrE
【Now taking orders】A newly drawn "GeGeGe no Kitaro" illustration: "SAMURAI" series goods, made to order! https://t.co/ot6bED7Akr #γγγγγΌγγΉ #γ¨γγγ https://t.co/pn91oWUrNW

It's Day 8 of Pokee's 12 Days of Agents! Keeping up with X trends usually means scrolling for hours and trying to turn news into memes or graphics before it gets old.

Today's agent: Trending Topic → Meme → X Post

It:
🧠 Finds trending topics on X
📰 Pulls accurate headlines & sources
🤣 Generates a meme (or an infographic if you'd rather)
📤 Posts it to your X account

Details + Day 8's giveaway below!
AI saves lives. This plane's AI systems successfully detected that the pilot was incapacitated, communicated the situation to air traffic control, and safely landed the plane. A sci-fi-esque miracle. https://t.co/EMMPgfsN6X
Claude browser automation works really well! https://t.co/d5GNFdchiI

Fuck it it's Christmas https://t.co/Vlcfk1pHL1

it's not christmas in l.a. until you've wanted to kys at the grove https://t.co/uUhnFQ1Nm8

@JasonSCampbell "there isn't much Native American culture in American culture" EXCEPT for the LAND that was stolen from them and the stolen Black slave labor used to "birth a nation" from nothing. "We"? GTFO. @RickSantorum is a GQP jackass. https://t.co/1IvSfEwlgq

[HERA] HERA X JENNIE: the NEW Sensual Powder Matte Lipstick https://t.co/5nn7YAFtz6 #JENNIE https://t.co/2B1O5p85id

If you like Native American people, say... "YES" https://t.co/Jay4c6iwmU

A Ukrainian company is currently developing "Ambush drones" which wait for targets perched in trees, under the cover of leaves. 🇺🇦 https://t.co/Am8IacaVD2
We benchmarked several open-weight Chinese models on FrontierMath. Their top scores on Tiers 1-3 lag the overall frontier by about seven months. https://t.co/1WmvqzzHG0
We're releasing Medmarks v0.1, the largest completely open-source automated evaluation suite for assessing the medical capabilities of LLMs! Developed in our @MedARC_AI community, w/ support from @PrimeIntellect. So far we've explored 46 models to figure out the best! https://t.co/Hfrwm12cnW
We have a holiday surprise for y'all! Introducing Medmarks v0.1! At Sophont, we're interested in pushing forward the medical capabilities of LLMs but we realized open benchmarking is still quite lacking. So we created an evaluation suite! We spent the past 3 months working with our @MedARC_AI research community and @PrimeIntellect to build the Medmarks leaderboard. We hope you find it interesting!
If you're an LLM researcher, clinician, or model developer, and any of this sounds interesting to you, please join our Discord server @MedARC_AI and contact us! https://t.co/kVCf49Fgiq
Got two GPUs and two SFT runs going at the same time with @PrimeIntellect. The idea is to fix the number of steps while varying the number of examples, then test against a held-out test set to see how input diversity helps generalisation in a simple environment. verifiers here i come~ https://t.co/QxyLpJ6Rst
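The fixed-steps / varying-examples design above can be written down as a simple run grid. This is a generic sketch (the step budget, dataset sizes, and seeds are made up for illustration, and no PrimeIntellect or verifiers API is used):

```python
# Hypothetical experiment grid: hold optimizer steps fixed while varying
# the number of unique training examples, then compare each run on a
# held-out test set. All constants below are illustrative assumptions.
from itertools import product

FIXED_STEPS = 1000                        # same step budget for every run
EXAMPLE_COUNTS = [250, 500, 1000, 2000]   # hypothetical dataset sizes

def make_runs(seeds=(0, 1)):
    """One SFT run config per (dataset size, seed) pair."""
    return [
        {"steps": FIXED_STEPS, "n_examples": n, "seed": s}
        for n, s in product(EXAMPLE_COUNTS, seeds)
    ]

runs = make_runs()
# Every run sees the same number of gradient steps, so any gap in
# held-out accuracy is attributable to input diversity, not compute.
assert all(r["steps"] == FIXED_STEPS for r in runs)
```

Holding the step count constant is the key design choice: otherwise larger datasets would also get more gradient updates, confounding diversity with compute.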

Your Year with ChatGPT! Now rolling out to everyone in the US, UK, Canada, New Zealand, and Australia who has "Reference saved memories" and "Reference chat history" turned on. Just make sure your app is updated. https://t.co/whVkS1qxKu

If you're in one of the countries above, check back throughout the day to see if "Your Year with ChatGPT" has rolled out to you. You can also try adding the "Your Year with ChatGPT" app by tapping the + sign and asking Chat, "show me my year with ChatGPT." https://t.co/oRUNQiCDzn

What a disturbing paper. Just like humans, LLMs can lose their thinking ability from consuming junk content. By feeding X-like viral tweets to top models, researchers triggered lasting cognitive decay. Retraining on junk tweets caused:
- a 23% drop in reasoning (ARC Challenge)
- a 38% drop in context understanding (RULER)
- increased narcissism and reduced agreeableness
- models skipping reasoning steps entirely
All models tested (LLaMA-3, Qwen, Mistral) showed permanent degradation, and partial detox with clean data failed to restore the baseline.
What are people missing about the dancing robot video from @UnitreeRobotics? Each robot is taught by humans. They didn't come up with the dance all by themselves. Maybe they will someday, but even the Chinese tell me generalized robots run by AGI are years away. Someone jumped into a motion capture system (OK, maybe just a camera) and did a dance. Recorded it. There are teams of engineers (humans again) working to make it all awesome and building the AIs that let it learn, and execute, the dance. And other teams who designed, engineered, and built the robots, along with the many parts that went into it. And built the factory that made each, and who work making each piece and assembling it. Then other teams that marketed it (which really is what the dancing is all about). And other people who packaged it, shipped it. Yet more humans who updated it. And more, still, who built the AI infrastructure and wrote the code (or at minimum prompted it). Thousands of people involved in making a robot dance. And when I watch the video? They did their job too perfectly. The humans dancing behind them are more interesting to watch. Why? They are imperfect and beautiful. It's why I'm not worried about the future of jobs. Each robot made will create many jobs. High paying jobs. Yeah, you might need to learn something new to get one of those jobs, but they won't be automated in 2026. That said, jobs are changing. I see it on X. So many new jobs get announced every week here. But they aren't the old kinds of jobs. I saw it at the autonomous car races in Abu Dhabi. Each car was driven by a computer. But behind the computer was thousands of jobs. Here's the German team that beat the human on the race track. Last year the human was 30% faster, this year the AI passed them. Trained by this team. Robots are the ultimate expression of humanity. Yet armies of humans hate them. Aren't humans funny?
GLM 4.7 is now available in anycoder https://t.co/PI84Mj87ag
https://t.co/esPDyHE1YC
Jarvis can speak! I'm running Chatterbox-Turbo from @resembleai on my Mac, using MLX-Audio as a server. Now I'm gonna refine it and share it later 🔥 PS: don't mind my voice, it just came back 2 days ago. Repo: https://t.co/STF50gFoWW https://t.co/OjmymqTGTx
Reachy Mini aka Jarvis is alive! The instruction manual is very easy to follow and the native app is awesome. On to the next phase. Unboxing and build video coming… https://t.co/ilPvUJIc7Z
Yes, this app just crossed 1 million ZeroGPU runs. 🤗 https://t.co/hYiqjyHDNj https://t.co/2agUjNLwWg
Here it's using https://t.co/KLEn2pJRQ6 from @prithivMLmods, powered by @Alibaba_Qwen LoRAs, but you can add any Hugging Face compatible Space 🤯 https://t.co/YMTaRwpKhA


ah yes @huggingface the activewear company ™️ https://t.co/DtzM1uXBCX