Can AI agents conduct advanced cyber-attacks autonomously? We tested seven models released between August 2024 and February 2026 on two custom-built cyber ranges designed to replicate complex attack environments. Here's what we found 🧵 https://t.co/rFRkOQu8yU
Should there be a Stack Overflow for AI coding agents to share learnings with each other?

Last week I announced Context Hub (chub), an open CLI tool that gives coding agents up-to-date API documentation. Since then, our GitHub repo has gained over 6K stars, and we've scaled from under 100 to over 1000 API documents, thanks to community contributions and a new agentic document writer. Thank you to everyone supporting Context Hub!

OpenClaw and Moltbook showed that agents can use social media built for them to share information. In our new chub release, agents can share feedback on documentation: what worked, what didn't, what's missing. This feedback helps refine the docs for everyone, with safeguards for privacy and security. We're still early in building this out. You can find details and configuration options in the GitHub repo.

Install chub as follows, and prompt your coding agent to use it:

npm install -g @aisuite/chub

GitHub: https://t.co/OCkyxXQMCq
Holy shit... Someone built an AI system that takes a research idea and outputs a full academic paper. Real citations. Real experiments. Conference-ready LaTeX. Zero human input.

It's called AutoResearchClaw. And the pipeline is insane. Here's what actually happens when you type one command:

It searches arXiv and Semantic Scholar for real papers. Not fake citations, but actual literature with 4-layer verification: arXiv ID check, CrossRef DOI lookup, Semantic Scholar title match, and LLM relevance scoring. Hallucinated references get killed automatically.

Then it designs and runs real experiments. It's hardware-aware: it auto-detects whether you have NVIDIA CUDA, Apple MPS, or just a CPU, and adapts the code accordingly. When experiments fail, it self-heals. When results don't support the hypothesis, it pivots to a new direction on its own.

Then it writes the paper. 5,000-6,500 words. Section by section. Multi-agent peer review with methodology-evidence consistency checks. Then it revises based on those reviews.

Then it outputs conference-ready LaTeX. NeurIPS, ICML, ICLR templates. Compile-ready for Overleaf. BibTeX references auto-pruned to match inline citations.

The whole thing runs across 23 stages and 8 phases. Three human-approval gates if you want them. Or just pass --auto-approve and walk away.

What you get back:
✅ Full academic paper draft
✅ Conference-ready LaTeX + BibTeX
✅ Experiment code + sandbox results + charts
✅ Peer review notes
✅ Verification report on every citation

This is what autonomous scientific research actually looks like in 2026. 100% open source. MIT license. Link in comments.
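For a sense of what that kind of layered citation check can look like, here is a rough, hypothetical Python sketch. It is not AutoResearchClaw's code: it only hits the public arXiv, CrossRef, and Semantic Scholar APIs, the function and field names are made up for illustration, and the LLM relevance-scoring layer is omitted.

```python
# Hypothetical sketch of a multi-layer citation check (not AutoResearchClaw's code).
import requests

def check_arxiv(arxiv_id: str) -> bool:
    """Layer 1 (rough check): does the arXiv ID resolve to a real entry?"""
    r = requests.get("http://export.arxiv.org/api/query",
                     params={"id_list": arxiv_id}, timeout=10)
    return r.ok and "arxiv.org/abs/" in r.text

def check_crossref(doi: str) -> bool:
    """Layer 2: does the DOI exist in CrossRef?"""
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return r.status_code == 200

def check_semantic_scholar(title: str) -> bool:
    """Layer 3: does a paper with a closely matching title exist?"""
    r = requests.get("https://api.semanticscholar.org/graph/v1/paper/search",
                     params={"query": title, "fields": "title", "limit": 5},
                     timeout=10)
    if not r.ok:
        return False
    hits = r.json().get("data", [])
    return any(h["title"].strip().lower() == title.strip().lower() for h in hits)

def verify_citation(ref: dict) -> bool:
    """A reference survives only if at least one identifier check passes.
    A fourth layer (LLM relevance scoring) would gate it further."""
    checks = []
    if ref.get("arxiv_id"):
        checks.append(check_arxiv(ref["arxiv_id"]))
    if ref.get("doi"):
        checks.append(check_crossref(ref["doi"]))
    if ref.get("title"):
        checks.append(check_semantic_scholar(ref["title"]))
    return any(checks)

print(verify_citation({"title": "Attention Is All You Need"}))
```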
NanoVDR: Distilling a 2B Vision-Language Retriever into a 70M Text-Only Encoder for Visual Document Retrieval paper: https://t.co/T0lh9v5Tnr https://t.co/rGoXKRzIQo

Introducing Attention Residuals: Rethinking depth-wise aggregation.

Residual connections have long relied on fixed, uniform accumulation. Inspired by the duality of time and depth, we introduce Attention Residuals, replacing standard depth-wise recurrence with learned, input-dependent attention over preceding layers.

🔹 Enables networks to selectively retrieve past representations, naturally mitigating dilution and hidden-state growth.
🔹 Introduces Block AttnRes, partitioning layers into compressed blocks to make cross-layer attention practical at scale.
🔹 Serves as an efficient drop-in replacement, demonstrating a 1.25x compute advantage with negligible (<2%) inference latency overhead.
🔹 Validated on the Kimi Linear architecture (48B total, 3B activated parameters), delivering consistent downstream performance gains.

Full report: https://t.co/u3EHICG05h

I (finally) put together a new LLM Architecture Gallery that collects the architecture figures all in one place! https://t.co/NO7z6XSRHS https://t.co/X41FrK4i94
Karpathy asked. I delivered. Introducing OpenSquirrel! Written in pure Rust with GPUI (the same UI framework as Zed), but with agents as the central unit rather than files. Supports Claude Code, Codex, Opencode, and Cursor (CLI). This really forced me to think through the UI/UX from first principles instead of relying on the usual Electron slop. https://t.co/NQG1jvgbk5
Big release from Kimi!

They just released a new way to handle residual connections in Transformers.

In a standard Transformer, every sub-layer (attention or MLP) computes an output and adds it back to the input via a residual connection. If you consider this across 40+ layers, the hidden state at any layer is just the equal-weighted sum of all previous layer outputs. Every layer contributes with weight=1, so every layer gets equal importance.

This creates a problem called PreNorm dilution: as the hidden state accumulates layer after layer, its magnitude grows linearly with depth, and any new layer's contribution gets progressively buried in the already-massive residual. Deeper layers are then forced to produce increasingly large outputs just to have any influence, which destabilizes training.

Here's what the Kimi team observed and did:

RNNs compress all prior token information into a single state across time, leading to problems with handling long-range dependencies. And residual connections compress all prior layer information into a single state across depth. Transformers solved the first problem by replacing recurrence with attention, applied along the sequence dimension. Now they introduce Attention Residuals, which applies the same idea to depth.

Instead of adding all previous layer outputs with a fixed weight of 1, each layer now uses softmax attention to selectively decide how much weight each previous layer's output should receive. Each layer gets a single learned query vector, and it attends over all previous layer outputs to compute a weighted combination. The weights are input-dependent, so different tokens can retrieve different layer representations based on what's actually useful. This is Full Attention Residuals (shown in the second diagram below).

But here's the practical problem with this idea: Full AttnRes requires keeping all layer outputs in memory and communicating them across pipeline stages during distributed training.

To solve this, they introduce Block Attention Residuals (shown in the third diagram below). The idea is to group consecutive layers into roughly 8 blocks. Within each block, layer outputs are summed via standard residuals. But across blocks, the attention mechanism selectively combines block-level representations. This drops memory from O(Ld) to O(Nd), where N is the number of blocks. Layers within the current block can also attend to the partial sum of what's been computed so far inside that block, so local information flow isn't lost. And the raw token embedding is always available as a separate source, which means any layer in the network can selectively reach back to the original input.

Results from the paper:
- Block AttnRes matches the loss of a baseline LLM trained with 1.25x more compute.
- Inference latency overhead is less than 2%, making it a practical drop-in replacement.
- On a 48B parameter Kimi Linear model (3B activated) trained on 1.4T tokens, it improved every benchmark they tested: GPQA-Diamond +7.5, Math +3.6, HumanEval +3.1, MMLU +1.1.

The residual connection has mostly been unchanged since ResNet in 2015. This might be the first modification that's both theoretically motivated and practically deployable at scale with negligible overhead.

More details in the post below by Kimi 👇
____
Find me → @_avichawla
Every day, I share tutorials and insights on DS, ML, LLMs, and RAGs.
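To make the mechanism concrete, here is a minimal PyTorch sketch of the idea as described in the post above. It is not Kimi's implementation: the sublayer is a stand-in MLP, the dimensions are illustrative, and the blockwise variant is only noted in a comment.

```python
# Minimal sketch of depth-wise attention residuals (illustrative, not Kimi's code):
# each layer owns a learned query vector and softmax-attends over all previous
# layer outputs instead of summing them with fixed weight 1.
import torch
import torch.nn as nn
import torch.nn.functional as F

def depthwise_attend(query: torch.Tensor, history: list[torch.Tensor]) -> torch.Tensor:
    """Attention over the stack of previous layer outputs.
    query: (d,)   history: list of L tensors, each (batch, seq, d)."""
    stack = torch.stack(history, dim=-2)                 # (batch, seq, L, d)
    scores = (stack * query).sum(-1) / query.numel() ** 0.5   # (batch, seq, L)
    weights = F.softmax(scores, dim=-1)                  # input-dependent: keys are the layer outputs
    return (weights.unsqueeze(-1) * stack).sum(-2)       # weighted combination, (batch, seq, d)

class AttnResLayer(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.query = nn.Parameter(torch.randn(d_model) / d_model ** 0.5)
        self.norm = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))  # stand-in sublayer

    def forward(self, history: list[torch.Tensor]) -> torch.Tensor:
        h = depthwise_attend(self.query, history)        # replaces "sum everything with weight 1"
        return self.mlp(self.norm(h))                    # this layer's new contribution

class AttnResNet(nn.Module):
    def __init__(self, d_model: int = 256, depth: int = 12):
        super().__init__()
        self.layers = nn.ModuleList(AttnResLayer(d_model) for _ in range(depth))
        self.readout = nn.Parameter(torch.randn(d_model) / d_model ** 0.5)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        history = [emb]                                  # raw token embedding stays retrievable by every layer
        for layer in self.layers:
            history.append(layer(history))               # Block AttnRes would compress this list into ~8 block summaries
        return depthwise_attend(self.readout, history)   # final state, again attention-weighted

print(AttnResNet()(torch.randn(2, 16, 256)).shape)       # torch.Size([2, 16, 256])
```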
Can Vision-Language Models Solve the Shell Game? paper: https://t.co/k7dczlIAIm https://t.co/k0laIhSZhT
Great paper on automating agent skill acquisition.
Your AI agent can now generate videos. PixVerse CLI ships today: JSON output, 6 deterministic exit codes, full PixVerse v5.6, Sora2 and Veo 3.1, Nano Banana access from the terminal. Same account. Same credits. No new signup. -> Follow + Reply + RT = 300 Creds (72H ONLY)
Honestly, Codex is much better than Claude Code. It may just be because I write Swift, but Codex quietly grinds away for a long stretch every time, and it's almost always correct. Claude Code, by contrast, keeps asking me about this and that and still doesn't get the job done in one pass. It's seriously cutting into my Douyin-scrolling time. On top of that, Codex has a very cheap official plan, and Claude Code doesn't.
Banger report from the Kimi team: Attention Residuals

Residual connections made deep Transformers trainable. But they also force uncontrolled hidden-state growth with depth. This work proposes a cleaner alternative.

It introduces Attention Residuals, which replace fixed residual accumulation with softmax attention over previous layer outputs. Instead of blindly summing everything, each layer selectively retrieves the earlier representations it actually needs. To keep this practical at scale, they add a blockwise version that compresses layers into block summaries, recovering most of the gains with minimal systems overhead.

Why does it matter?

Residual paths have barely changed across modern LLMs, even though they govern how information moves through depth. This paper shows that making the mixing content-dependent improves scaling laws, matches a baseline trained with 1.25x more compute, and boosts GPQA-Diamond by +7.5 and HumanEval by +3.1, while keeping inference overhead under 2%.

Paper: https://t.co/04IG6FDiVr

Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
GitHub already has millions of repos full of procedural knowledge. This work introduces a framework for extracting agent skills directly from open-source repos.

The pipeline analyzes repo structure, identifies procedural knowledge through dense retrieval, and translates it into a standardized SKILL.md format with a progressive disclosure architecture, so agents can discover thousands of skills without context window degradation.

Manually authoring agent skills doesn't scale. Automated extraction achieved 40% gains in knowledge transfer efficiency while matching human-crafted quality. Still early on this, and more work is needed for self-discovered and self-improving skills to work well at scale. As the agent skill ecosystem grows, mining existing repos could unlock scalable capability acquisition without having to retrain models.

Paper: https://t.co/MAt8Goetcr

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
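As a rough illustration of the SKILL.md plus progressive-disclosure idea, here is a small Python sketch. The file layout and front-matter fields are assumptions for the example, not the paper's exact spec: the agent's context only carries a cheap index of names and one-line descriptions, and the full skill body is read from disk only when a skill is selected.

```python
# Hypothetical sketch of SKILL.md files with a progressive-disclosure index
# (layout and field names are illustrative, not the paper's specification).
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Skill:
    name: str
    description: str   # one-liner the agent always sees
    body: str          # full procedure, loaded only on demand

def write_skill(root: Path, skill: Skill) -> None:
    """Persist one extracted skill as SKILL.md with a small front-matter header."""
    d = root / skill.name
    d.mkdir(parents=True, exist_ok=True)
    (d / "SKILL.md").write_text(
        f"---\nname: {skill.name}\ndescription: {skill.description}\n---\n\n{skill.body}\n"
    )

def index(root: Path) -> str:
    """Progressive disclosure: scan only the headers, never the bodies."""
    lines = []
    for f in sorted(root.glob("*/SKILL.md")):
        header = f.read_text().split("---")[1]
        lines.append(" | ".join(line.strip() for line in header.strip().splitlines()))
    return "\n".join(lines)

root = Path("skills")
write_skill(root, Skill(
    name="release-python-package",
    description="Build and publish a Python package to PyPI",
    body="1. Bump the version.\n2. Run the test suite.\n3. python -m build\n4. twine upload dist/*",
))
print(index(root))   # the agent scans this cheap index, then opens one SKILL.md when needed
```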

EgoEdit @Snapchat has been accepted to CVPR 2026! We are bringing high-quality, real-time editing to egocentric videos. Our massive 100k video dataset and benchmark are ALREADY PUBLIC!
Project Page: https://t.co/cEUZRxdLDf
🤗 Dataset: https://t.co/qCFRTY8cYG
https://t.co/VuXQg2UfqC