Your curated collection of saved posts and media

Showing 32 posts · last 7 days · newest first
@reggitales · Dec 11, 2025

You have 100+ tabs open and your brain is fried. Introducing Dex, your second brain in Chrome that organizes, remembers, and takes action for you. Turn tabs into to-dos, multitask with agents, find and save anything for later. All without leaving your tab. As a founder, it's already saved me hundreds of hours. Comment for 1M free tokens - joindex [dot] com

🖼️ Media
@liuzhuang1234 · Dec 12, 2025

Stronger Normalization-Free Transformers – new paper. We introduce Derf (Dynamic erf), a simple point-wise layer that lets norm-free Transformers not only work, but actually outperform their normalized counterparts. https://t.co/NAPJvfsEGI
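The post doesn't give Derf's exact form, only that it is a simple point-wise layer built on erf. As a purely hypothetical sketch, assume a parameterization y = gamma * erf(alpha * x + beta) with learnable scalars standing in for normalization:

```python
import math

# Hypothetical sketch only: the post does not specify Derf's parameterization.
# Assume a point-wise layer y = gamma * erf(alpha * x + beta), where alpha,
# beta, gamma are learnable scalars that replace a normalization layer.
class Derf:
    def __init__(self, alpha=1.0, beta=0.0, gamma=1.0):
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def __call__(self, xs):
        # Point-wise: each activation is squashed independently, no batch
        # or layer statistics are computed (hence "normalization-free").
        return [self.gamma * math.erf(self.alpha * x + self.beta) for x in xs]

layer = Derf(alpha=0.5)
out = layer([-2.0, 0.0, 2.0])  # bounded, smooth, zero-centered outputs
```

Because erf saturates smoothly in [-1, 1], such a layer can bound activations the way normalization does, but with O(1) per-element cost and no cross-token statistics; see the paper for the actual design.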

🖼️ Media
@michahu8 · Feb 27, 2025

Training on a little 🤏 formal language BEFORE natural language can make pretraining more efficient! How and why does this work? The answer lies…Between Circuits and Chomsky. 🧵1/6👇 https://t.co/xXlBlrfSls

🖼️ Media
@jxnlco · Dec 13, 2025

Thought for 1hr https://t.co/EDdRc0iBxH

🖼️ Media
@_akhaliq · Dec 13, 2025

Omni-Attribute Open-vocabulary Attribute Encoder for Visual Concept Personalization https://t.co/OzlsDZOyLz

🖼️ Media
@_akhaliq · Dec 13, 2025

discuss: https://t.co/A6t6mDJjKD

🖼️ Media
@github · Dec 13, 2025

It started as a small idea to connect AI models to developer workflows. It turned into one of the fastest-growing open standards in the industry. 🚀 Now, the Model Context Protocol is officially joining the @linuxfoundation. Hear from the engineers and maintainers of GitHub, @Microsoft, @AnthropicAI, and @OpenAI on the journey from day zero to now. 👇 https://t.co/CAUhhcUugB

🖼️ Media
@MarioNawfal · Dec 13, 2025

NEURALINK: THE BRAIN CHIP GAME FIRED ITS SUPPLY CHAIN

Everyone else is stuck waiting for parts. Neuralink built the whole factory instead.

The Takeover:
• Chips, implants, robots, surgery tools built in-house
• Labs, imaging, animal care all under one roof
• Even the HQ is custom-built for speed

No middlemen. No bottlenecks. Just pure vertical momentum!

Source: @neuralink

🖼️ Media
@pete_bauman · Dec 10, 2025

@SawyerHackett Government policy driving ethnic cleansing has been wrong. I want to repair the hateful damage caused by genocidal anti-Whites. Without our own lands we cannot endure. A homeland for every race is the moral path. https://t.co/EpJhS4T7VG

🖼️ Media
@bschaeffer123 · Dec 12, 2025

@peterdiver69 @brettachapman Are you REALLY going to lecture the rest of us about how to treat indigenous people? Really mate? https://t.co/kL0dVsEx64

🖼️ Media
@omarsar0 · Dec 13, 2025

Weak-to-Strong GraphRAG

Interesting ICLR 2026 submission with some insights on improving GraphRAG systems and making them more feasible in production environments.

Graph-based RAG lets LLMs ground responses in structured knowledge graphs. But there's a fundamental mismatch between retrievers and the LLMs they serve. As knowledge graphs become central to RAG systems, aligning retrievers to LLM needs through LLM feedback offers a principled path to better multi-hop reasoning at lower cost.

The problem is twofold. First, graph retrievers train on weak supervision like query-answer shortest paths, which misses key reasoning steps and introduces spurious connections. Second, retrieved knowledge comes back unorganized. LLMs are sensitive to context ordering, and messy graph data adds unnecessary complexity.

This new research introduces ReG (Refined Graph-based RAG), a framework that uses LLM feedback to align weak retrievers with the LLMs they serve.

Graph-based RAG is essentially a black-box combinatorial search: given a query, find the minimal sufficient subgraph for correct reasoning, with the LLM acting as evaluator. But exhaustively searching this space is computationally intractable. ReG takes a simpler approach: instead of optimizing over all possible subgraphs, it uses the LLM to select more effective reasoning chains from candidate chains extracted from the knowledge graph. The improved supervision trains better retrievers. A structure-aware reorganization module then refactors retrieval results into logically coherent evidence chains, aligning the presentation with how LLMs actually process information.

On CWQ-Sub with GPT-4o, ReG achieves 68.91% Macro-F1 versus SubgraphRAG's 66.48%. On WebQSP-Sub, 80.08% versus 79.4%. The gains hold across multiple LLM backbones.

The data efficiency is notable in the reported results: ReG trained on just 5% of the data matches baselines trained on 80%. The refined supervision eliminates noise that larger datasets would otherwise compound.

When paired with reasoning LLMs like QwQ-32B, ReG reduces reasoning tokens by up to 30% while improving performance. The structure-aware reorganization prevents the "overthinking" problem where LRMs produce verbose traces in a noisy context.

Paper: https://t.co/mF9sLB63JN
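The supervision-refinement step described above can be sketched roughly. This is a hypothetical illustration, not ReG's actual interface: `llm_score` stands in for a real LLM judge, and the (head, relation, tail) chain format is an assumption.

```python
# Hypothetical sketch of LLM-refined supervision: a judge scores candidate
# reasoning chains extracted from the knowledge graph, and only the
# top-scoring chains become positive labels for retriever training.
# `llm_score` is a toy stand-in for an actual LLM call.

def llm_score(query, chain):
    # Toy judge: reward chains whose relations overlap the query's terms.
    terms = set(query.lower().split())
    return sum(1 for (_head, rel, _tail) in chain if rel in terms)

def refine_supervision(query, candidate_chains, keep=1):
    # Rank candidate chains by judge score; the winners replace noisy
    # shortest-path labels as training signal for the retriever.
    ranked = sorted(candidate_chains, key=lambda c: llm_score(query, c),
                    reverse=True)
    return ranked[:keep]

query = "who directed the film"
chains = [
    [("Inception", "directed", "Nolan")],                           # on-topic
    [("Inception", "released", "2010"), ("2010", "next", "2011")],  # spurious
]
positives = refine_supervision(query, chains)
```

The point of the design is that the judge only ranks a small pool of extracted candidates rather than searching all subgraphs, which is what keeps the LLM feedback loop tractable.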

🖼️ Media
@mervenoyann · Dec 13, 2025

my @huggingface 2025 wrapped has arrived and it's so surprising 😄 thanks everyone for all the love you showed to my repositories 💛 https://t.co/IMBdAJ9YFV

🖼️ Media
@nalkalc · Nov 01, 2016

New neural net for Language and Machine Translation! Fast and simple way of capturing very long range dependencies https://t.co/0gSoVVGrYd https://t.co/cWANbRTAMQ

🖼️ Media
@DeepLearn007 · Sep 15, 2017

Primer on Neural Networks for Natural Language Processing #AI #MachineLearning #DeepLearning #ML #DL #nlp #tech https://t.co/rStdho8pTx https://t.co/TRtvz91SpT

🖼️ Media
@N8SunFGC · Aug 19, 2020

@DeepLeffen i am only mildly disturbed by the sheer accuracy the model has managed to pick up on these references https://t.co/OM2Xijwr6F

🖼️ Media
@svpino · Oct 03, 2020

I dislike this so much... Unfortunately, it's very common among those tweets recommended by Twitter as belonging to the "Machine Learning" category. How can you convey any useful information when 90% of your tweet is just spamming hashtags? https://t.co/1Obp25LJTA

🖼️ Media
@TeachTheMachine · Feb 19, 2022

Naive Bayes Classifier From Scratch in Python https://t.co/BxR2urXisa https://t.co/i6pIxS5wMe

🖼️ Media
@ds_bun_ · May 29, 2022

Complete Machine Learning pipeline for NLP tasks https://t.co/Bz7Qrde59L

🖼️ Media
@ds_bun_ · Apr 01, 2023

Complete Machine Learning pipeline for NLP tasks https://t.co/Bz7QrdwenT

🖼️ Media
@Sanemavcil · Dec 13, 2025

🚀 Quantum Computing: the end of waiting? 🧠⚛️

Classical computers think in bits: 0 OR 1 🪙
Quantum computers use qubits: 0 AND 1 (kind of like a spinning coin) 🌀🪙

So instead of "checking one straw at a time" 🐢⏳, quantum machines can explore many possibilities in parallel, then use interference to amplify the best answers ✨📈 (Not magic, not for everything... but game-changing for specific problems.)

🌍 Why it matters (the big idea):
🧬 Drug discovery → simulate molecules faster, discover new medicines
🧱 New materials → better batteries, superconductors, catalysts
🌡️ Climate & energy → improved chemistry + materials for cleaner tech
🛰️ Optimization → logistics, routing, scheduling, supply chains
🔐 Cryptography → new security era (post-quantum world)

We're still early (today's devices are noisy 🧊🔧), but the direction is clear: we're not just making computers faster... we're changing how we compute. ⚡🧠

What's your take: breakthrough of the decade or "still too early"? 👇🔥

#QuantumComputing #Quantum #Qubits #DeepTech #FutureTech

🖼️ Media
@dair_ai · Dec 13, 2025

First comprehensive framework for how AI agents actually improve through adaptation.

While there is a lot of hype about building bigger models, the research reveals a different lever: systematic adaptation of agents and their tools.

Researchers from many universities surveyed the rapidly expanding landscape of agentic AI adaptation. What they found: a fragmented field with no unified understanding of how agents learn to use tools, when to adapt the agent versus the tool, and which strategies work for which scenarios. These are all important for building production-ready AI agents.

Adaptation in agentic AI follows four distinct paradigms that most practitioners conflate or ignore entirely. The framework organizes all adaptation strategies into two dimensions:

> Agent Adaptation (A1, A2): modifying the agent's parameters, representations, or policies.
> Tool Adaptation (T1, T2): optimizing external components like retrievers, planners, and memory modules while keeping the agent frozen.

Let's discuss each in more detail.

A1: Tool Execution Signaled Agent Adaptation. The agent learns from verifiable outcomes produced by tools it invokes: code sandbox results, retrieval relevance scores, and API call outcomes. Methods like Toolformer, ToolLLM, and DeepRetrieval fall here. The signal comes from whether the tool execution succeeded, not whether the final answer was correct.

A2: Agent Output Signaled Agent Adaptation. The agent optimizes based on evaluations of its own final outputs. This includes both tool-free reasoning (DeepSeek-R1, Kimi-1.5) and tool-augmented adaptation (ReTool, Search-R1). The signal comes from answer correctness or preference scores, not intermediate tool calls.

T1: Agent-Agnostic Tool Adaptation. Tools trained independently of any specific agent, including HuggingGPT, ViperGPT, and classic ML tools that serve as plug-and-play modules. These tools generalize well across different agents but may not be optimized for any particular one.

T2: Agent-Supervised Tool Adaptation. Tools adapted using signals from a frozen agent's outputs. This includes reward-driven retriever tuning, adaptive search subagents, and memory-update modules like Reflexion and Memento. The agent stays fixed while tools learn to better support its reasoning.

The trade-offs between paradigms are explicit. Cost and flexibility: A1/A2 require substantial compute for training billion-parameter models but offer maximal flexibility. T1/T2 optimize external components at a lower cost but may hit ceilings set by the frozen agent's capabilities.

Generalization patterns differ significantly. T1 tools trained on broad distributions generalize well across agents and tasks. A1 methods risk overfitting to specific environments unless carefully regularized. T2 approaches enable independent tool upgrades without agent retraining, facilitating continuous improvement.

The researchers identify when each paradigm fits. A1 suits scenarios with verifiable tool outputs like code execution or database queries. A2 works when only the final answer quality matters. T1 applies when tools must serve multiple agents. T2 excels when the agent is fixed but tool performance is the bottleneck.

State-of-the-art systems increasingly combine paradigms. A deep research system might use T1-style pretrained retrievers, T2-style adaptive search agents trained via frozen LLM feedback, and A1-style reasoning agents fine-tuned with execution feedback in a cascaded architecture.

Four open challenges remain unsolved:
- Co-adaptation: jointly optimizing agents and tools remains underexplored.
- Continual adaptation: enabling lifelong learning without catastrophic forgetting.
- Safe adaptation: preventing harmful behaviors during optimization.
- Efficient adaptation: reducing computational costs while maintaining performance.

The choice of adaptation paradigm fundamentally shapes what an agentic system can learn, how fast it improves, and whether improvements transfer across tasks. Teams building production agents need a principled framework for these decisions, not ad-hoc choices.

Report: https://t.co/o2KPQLLQsZ
Learn to build effective AI agents in our academy: https://t.co/g1Ijo0S5AA
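The "when each paradigm fits" guidance above can be condensed into a tiny decision helper. This rule set is an illustrative simplification of the survey's discussion, not the paper's formal framework:

```python
# Simplified decision helper reflecting the survey's guidance on which
# adaptation paradigm fits a given setting. Illustrative only: real
# systems often combine paradigms (e.g. T1 retriever + A1 agent).

def pick_paradigm(tool_outputs_verifiable: bool,
                  agent_trainable: bool,
                  tools_shared_across_agents: bool) -> str:
    if not agent_trainable:
        # Agent is frozen: adapt the tools instead. Shared tools should
        # stay agent-agnostic (T1); dedicated tools can learn from the
        # frozen agent's feedback (T2).
        return "T1" if tools_shared_across_agents else "T2"
    # Agent can be trained: choose the feedback signal. Verifiable tool
    # outcomes (sandbox results, API calls) give A1; otherwise score the
    # agent's final answers (A2).
    return "A1" if tool_outputs_verifiable else "A2"

# e.g. a trainable code-execution agent with verifiable sandbox results:
print(pick_paradigm(True, True, False))  # A1
```

Even as a toy, this captures the survey's main axis: first decide whether the agent or the tool is the thing being adapted, then decide where the training signal comes from.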

🖼️ Media
@ClementDelangue · Dec 13, 2025

This weekend, we’re shipping 3,000 Reachy Minis all over the world! To my knowledge, it’s one of the largest shipments of AI robots of the year (or ever?) just in time for Christmas! If you’re in this batch, expect to receive an email in the coming days.

Keep in mind that this first version is designed for AI builders, so it’s very bare-bones. At the moment, it has very little software on it, and, like most early hardware, will likely have a fair number of bugs and quirks. Right now, it’s much more of an open-source, DIY robotics platform than a polished consumer robot. The beautiful thing is that we’ve already seen community members managing to hack cool apps and help to improve Reachy Mini a lot!

If you’re not in this batch or haven’t bought a Reachy Mini yet, you can expect delivery by the end of January, or roughly 90 days after your purchase (we’re actively working to shorten that timeline). In the meantime, you can start building for current or future Reachy owners using the simulator in the GitHub repo, as all of the software is open-source.

Congrats to the @pollenrobotics @huggingface @seeedstudio teams that worked extremely hard to get to this milestone, just 5 months after the beginning of the project! I'm terribly excited to see what you’ll all build with this! Let’s go open and collaborative AI robotics!

🖼️ Media
@UnderSecE · Dec 13, 2025

Yesterday, we launched PAX SILICA, a partnership built for the AI age. This is the first time countries are organizing around compute, silicon, minerals and manufacturing as shared strategic assets. President Trump, Secretary Rubio and @davidsacks47 understood earlier than anyone that AI is the new backbone of economic power. This declaration operationalizes that insight. Building off of President Trump’s landmark AI Action Plan, this is American AI diplomacy at its best: building flexible coalitions, shaping markets, and putting AMERICA FIRST! 🇺🇸

🖼️ Media
@GergelyOrosz · Dec 11, 2025

I love Substack. Always have. Their team is great. But a silent change could force me off the platform if it stays. They broke email. My paid subscribers cannot read today's paid newsletter on mobile without downloading the Substack app. @SubstackInc: roll this back. Now. https://t.co/cY4z3MMctV

🖼️ Media
@joemasilotti · Dec 12, 2025

Free Hotwire Native starter kit, anyone?

📂 Simplified, modern project organization
🏷️ Three tabs configured in a single place
🛜 Dynamic endpoint resolution (dev vs. prod)
🔀 Forms presented modally (via path configuration)
🌁 Bridge Components installed (& examples)
💎 More!

https://t.co/BQT4j5m042

🖼️ Media
@emollick · Dec 13, 2025

It is hard to separate AI economic impact out from other factors, but this is a clever attempt to get at the effect of AI on entrepreneurship by looking at which areas of China were more likely to adopt AI early. They estimate that ChatGPT led to a 6% increase in new startups. https://t.co/Aunf0cczUW

🖼️ Media
@jxnlco · Dec 13, 2025

I have returned from the sea! https://t.co/Trmt54o3l6

🖼️ Media
@HuggingPapers · Dec 13, 2025

NVIDIA just released a highly efficient gpt-oss-120b Eagle3 model on Hugging Face. This quantized MoE model uses speculative decoding for top-tier throughput. https://t.co/4IjOhHfJ2z

🖼️ Media
@rasbt · Dec 13, 2025

Just updated the Big LLM Architecture Comparison article... ...it grew quite a bit since the initial version in July 2025, more than doubled! https://t.co/oEt8XzNxik https://t.co/RZuwp6ZUaF

🖼️ Media
@NativeGaming · Jun 30, 2023

You may notice a new face on the stage today @HCS Sentinels❌ SSG Who? 🛰 T1? Do they even play Halo? Welcome @Kuhlect_ to Native White ⚪🫡 https://t.co/6Y7VI5aD2s

🖼️ Media
@Cameron6lack142 · Dec 12, 2025

Black foreigners and White Settlers. Everyone is united except for Black South Africans! https://t.co/vezuJvQGnX

🖼️ Media
@hardmaru · Dec 13, 2025

Bringing Sakana AI's technology to the real world. We are rapidly expanding the Applied Team and actively hiring Applied Research Engineers. Ahead of next year's new challenges, please apply before the end of the year! 🚀 https://t.co/FuEoI2xrzS

@SakanaAILabs • Sat Dec 13 01:58

Sakana AI is recruiting Applied Research Engineers to take on the development of uncharted solutions using the world's most advanced autonomous agent technology. We look forward to you joining as a core member to further accelerate bringing this technology into society 🚀 https://t.co/eQ7e0rIOmg (Both full-time employees and student interns are welcome ✨) https://t.co/jnpuUp0ajJ

🖼️ Media
Page 288 of 557