Your curated collection of saved posts and media
I built https://t.co/4rAUTXjAhc to help me stay up to date with new AI stuff. It's tracking 14K open source repos so far, with contributions from over 145K developers. Every day, it:
- searches for new AI repos (based on 123 keywords and topics)
- surfaces repos that are gaining traction, and
- categorizes each repo
The annotations are done by AI, so they are not super accurate, but they've helped me find some useful stuff. It also lets me see where the contributors are, so when I travel, I can find folks doing cool stuff in a new city or country.

Comparing the AI open source ecosystem in the US, China, Europe, and the rest of Asia over time https://t.co/XFUgGoYKlY
thanks @gracepace_ for a great party! also snapped a great throwback to brev @NaderLikeLadder https://t.co/oGUiof6qES
When we first launched launchables (1-click GPU deployments), we filled NVIDIA's fridges with lunchables that had these stickers on them https://t.co/zqDymTnLKY
CLIs are super exciting precisely because they are a "legacy" technology, which means AI agents can natively and easily use them, combine them, and interact with them via the entire terminal toolkit. E.g. ask your Claude/Codex agent to install this new Polymarket CLI and ask for any arbitrary dashboards or interfaces or logic. The agents will build it for you. Install the GitHub CLI too and you can ask them to navigate the repo, see issues, PRs, discussions, even the code itself. Example: Claude built this terminal dashboard of the highest-volume Polymarket markets and their 24hr change in ~3 minutes. Or you can make it a web app or whatever you want. Even more powerful when you use it as a module of bigger pipelines. If you have any kind of product or service, think: can agents access and use it?
- are your legacy docs (for humans) at least exportable in markdown?
- have you written Skills for your product?
- can your product/service be used via CLI? Or MCP?
- ...
It's 2026. Build. For. Agents.
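The GitHub CLI point generalizes: anything that speaks plain text and JSON over a terminal is agent-friendly. As a minimal sketch (assuming `gh` is installed and authenticated; the helper names here are my own, not from any library), a script, or an agent, can pull structured repo data like this:

```python
import json
import shutil
import subprocess


def parse_issue_json(raw: str) -> list[dict]:
    """Parse the JSON array that `gh ... --json` prints to stdout."""
    return json.loads(raw)


def gh_issue_titles(repo: str, limit: int = 5) -> list[str]:
    """Return up to `limit` open-issue titles for `repo` via the GitHub CLI.

    Returns [] if `gh` is not on PATH, so the sketch degrades gracefully.
    """
    if shutil.which("gh") is None:
        return []
    result = subprocess.run(
        ["gh", "issue", "list", "--repo", repo,
         "--limit", str(limit), "--json", "title"],
        capture_output=True, text=True, check=True,
    )
    return [item["title"] for item in parse_issue_json(result.stdout)]


if __name__ == "__main__":
    # Hypothetical repo slug, for illustration only.
    print(gh_issue_titles("owner/repo"))
```

Because the interface is just argv in and JSON out, an agent can compose this with any other CLI in a pipeline without special integration work.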
We rebuilt Next.js in a week. No, really. The team ported the framework to run natively on Workers to prove what's possible with edge-first architecture. Dive into the technical hurdles we solved to eliminate Node.js dependencies. https://t.co/GqYBiZ5Qum
wait whaaat?? @NaderLikeLadder is this really your cell? god this is next level! https://t.co/GFJpF1m0v6
Crazy project turns AI history into structured data and publishes it to Hugging Face. "DataClaw parses session logs, redacts secrets and PII, and uploads the result as a ready-to-use dataset." https://t.co/WECC3QnRsk
I am thrilled and honored that Sparky and I were selected winners for NVIDIA GTC Golden Ticket. Here's how he received the news. https://t.co/AkYPw7A114
Today is #SaferInternetDay 2026, and we're joining thousands of others in shining a light on online safety and digital wellbeing. AI has become part of everyday life, and it's important that we support people in navigating these changes. Every conversation about online safety makes a difference. You can learn more and get involved here: https://t.co/SqqBe8ZMCK @Nominet
Train smarter. Deploy faster. Eat better croissants. Early bird pricing for #PyTorchCon Europe saves you €200 if you register by 27 February. See you in Paris, 7-8 April. https://t.co/vPcgBQu8u5 https://t.co/WbNZj16d3U
Matt White (@matthew_d_white) will speak at #NEARCON 2026 in San Francisco, Feb 23–24. Autonomous Systems In Practice examines how #AutonomousSystems are built, evaluated, constrained, secured, and made reliable beyond demos. Mon, Feb 23, 11:20–11:50 AM PT. With Sergey Astretsov (@sergeiest), @near_ai; Nalin Mittal, @Google; Noëlle Becker Moreno (@1stNoL), @edgeandnode. Agenda: https://t.co/A79jGPff8K
PyTorch Day India 2026 session recordings are now available. On February 7, 460 builders gathered in Bengaluru for a full day of technical talks focused on kernels, compilers, inference, and production-grade AI systems. Co-hosted by IBM, NVIDIA, Red Hat, and PyTorch Foundation, the event highlighted heterogeneous compute, platform stability, and end-to-end performance from edge to data center. Full event playlist: https://t.co/0ELH0O0Ut3 #PyTorch #AIInfrastructure #OpenSource
The schedule for #PyTorchCon Europe is live! Join us 7-8 April in Paris for 2 days of research breakthroughs, production #AI & #PyTorch ecosystem innovation. Explore the full agenda: https://t.co/ECBkOSYTI2 Secure your pass today: https://t.co/uy8MR1KdV0 https://t.co/LEDsoQN4Nf
Trying to tune your Expert Parallel (EP) communication for hyperscale mixture-of-experts (MoE) models? This post, "Optimizing Communication for Mixture-of-Experts Training with Hybrid Expert Parallel", details an efficient MoE EP communication solution, Hybrid-EP, and its use in the NVIDIA Megatron family of frameworks, on NVIDIA Quantum InfiniBand and NVIDIA Spectrum-X Ethernet platforms. It also dives into the effectiveness of Hybrid-EP in real-world model training. Read the full post: https://t.co/4NOFpaiFYz #PyTorch #OpenSourceAI #AI #Inference #Innovation
5 days left! Early bird pricing for #PyTorchCon Europe ends 27 February. Save €200 when you register now. ICYMI the schedule is live! Start planning your 7-8 April in Paris: https://t.co/XXIopbwG2z Register: https://t.co/Oxa4w2vTOh https://t.co/nHcQM9bTKD
Helion's autotuner has been a powerful tool for optimizing ML kernels, but it came with a challenge: long autotuning sessions that could take 10+ minutes, sometimes even hours. The PyTorch team at Meta set out to solve this bottleneck using machine learning itself. Read here how they did it: https://t.co/N8lYGeXuv9 Spoiler alert: Using Likelihood-Free Bayesian Optimization Pattern Search, they achieved a 36.5% reduction in autotuning time for NVIDIA B200 kernels while improving kernel latency by 2.6%. For AMD MI350 kernels, they saw a 25.9% time reduction with 1.7% better latency. Some kernels showed even more dramatic improvements: up to 50% faster autotuning and >15% latency gains. Authors: Ethan Che, Oguz Ulgen, Maximilian Balandat, Jongsok Choi, Jason Ansel (Meta) #PyTorch #Helion #MachineLearning #BayesianOptimization #OpenSourceAI #Performance
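The linked post has the real method; purely as a toy illustration of the pattern-search half of the idea (not Helion's implementation, and with made-up config names and a made-up latency function), a coordinate pattern search over a discrete kernel-config space looks like this:

```python
def pattern_search(objective, start, space, max_iters=50):
    """Toy coordinate pattern search over a discrete config space.

    `space` maps each parameter name to its ordered candidate values.
    Each pass tries moving every parameter one step up or down and
    keeps any move that lowers the objective (e.g. measured latency);
    the search stops when a full pass yields no improvement.
    """
    cfg = dict(start)
    best = objective(cfg)
    for _ in range(max_iters):
        improved = False
        for key, values in space.items():
            idx = values.index(cfg[key])
            for neighbor in (idx - 1, idx + 1):
                if 0 <= neighbor < len(values):
                    candidate = dict(cfg, **{key: values[neighbor]})
                    score = objective(candidate)
                    if score < best:
                        cfg, best, improved = candidate, score, True
        if not improved:
            break
    return cfg, best


if __name__ == "__main__":
    # Hypothetical tuning space and a stand-in "latency" function.
    space = {"block": [16, 32, 64, 128], "warps": [1, 2, 4, 8]}
    toy_latency = lambda c: abs(c["block"] - 64) + abs(c["warps"] - 4)
    print(pattern_search(toy_latency, {"block": 16, "warps": 1}, space))
```

The Bayesian-optimization part of the Meta approach replaces the expensive `objective` calls with a learned surrogate, which is where the autotuning-time savings come from.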
PyTorch Foundation Announces New Members as Agentic AI Demand Grows. We welcomed nine new members since December 2025, adding five Silver and four Associate organizations spanning AI startups, leading universities, and global governments. New Silver members include Clockwork Systems, Inc., Emmi AI, National IT Industry Promotion Agency, @nota_ai, and yasp. New Associate members include @CarnegieMellon, CommonAI CIC, Monash University, and @uniofleicester. Membership growth underscores rising demand for open, community-driven, production-ready AI tooling as agentic AI accelerates, strengthening the ecosystem around PyTorch Foundation projects, including PyTorch, @vllm_project, @DeepSpeedAI, and @raydistributed. "From training frameworks like PyTorch and optimization systems like DeepSpeed that create capabilities such as advanced tool calling, to inference engines and orchestration layers like vLLM and Ray that operationalize them, the Foundation hosts critical layers of the open source AI stack. The growth of our membership reflects a shared recognition that these capabilities must be built collaboratively in a vendor-neutral environment." – Mark Collier (@sparkycollier), GM of AI at The Linux Foundation and Executive Director of the PyTorch Foundation. Read the full announcement: https://t.co/xOhazHWfUP #PyTorch #AIInfrastructure #OpenSourceAI
The CFP is live. Speak at #KubeCon + #CloudNativeCon + #OpenInfraSummit + #PyTorchCon China 2026, happening 8-9 September in Shanghai. Got insights on #CloudNative, #AI, infra, or #OpenSource? Submit by 3 May. Apply now: https://t.co/V7rdULpoqp https://t.co/vFe1h09qkZ
3 days left to save! Spend only €449 on your #PyTorchCon Europe ticket if you register by 27 February. Join researchers, developers, and #AI engineers 7-8 April in Paris and help shape #PyTorch in production. Schedule: https://t.co/p0Q83FeHbC Register: https://t.co/pyVwnUS4EW

Announcing the @PyTorch OpenEnv Hackathon with CV and @SHACK15sf. Build RL environments, post-train models, and tackle 5 major RL + agentic orchestration challenges.
- $100K+ cash in prizes
- Teams up to 4
- In-person in San Francisco
Top judges, mentors, and speakers from: @Meta @huggingface @UCBerkeley @UnslothAI @fleet_ai @SnorkelAI @PatronusAI @mercor_ai @HalluminateAI @scale_AI @CoreWeave @OpenPipeAI @northflank @cursor_ai and Scaler AI Labs. Register below.
New @DeepSpeedAI updates make large-scale multimodal training simpler and more memory-efficient. Our latest blog introduces a PyTorch-identical backward API that makes coding multimodal training loops easier, plus low-precision model states (BF16/FP16) that can reduce peak memory by up to 40% when combined with torch.autocast. Read the full post for details: https://t.co/sSHMGhRixV #DeepSpeed #PyTorch #MemoryEfficiency #MultimodalTraining #OpenSourceAI
Matt White will speak at Dubai AI Festival, joining a panel discussion on who powers the global AI infrastructure. The session brings together leaders shaping the systems, platforms, and governance models that underpin AI at scale. As General Manager of AI at Linux Foundation, Matt will share perspectives from the open ecosystem driving AI development and deployment worldwide. Dubai World Trade Centre, April 7–8, 2026. Learn more: https://t.co/GzABsuPw0b #PyTorch #AIInfrastructure #OpenSourceAI
