Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
native_info
@native_info
📅 Dec 05, 2018 · 2667d ago
🆔 79127552

[ Orders close Wednesday, December 19 ] Rocket Boy has faithfully turned "Nanami Aoi", the fresh-faced beauty drawn by momi, into a figure! Thanks to an interchangeable body, both the tight swimsuit look and the unfastened state can be reproduced. https://t.co/nRqEPywzKj https://t.co/ulxa5KmFVw

🖼️ 3 media attachments
native_info
@native_info
📅 Feb 10, 2019 · 2600d ago
🆔 90893568

"Nora", from an illustration by Fumikane Shimada. Prototype sculpting: Abira #wf2019w #ネイティブ https://t.co/EsfqPyHxxP

🖼️ 4 media attachments
Native3rd
@Native3rd
📅 Apr 24, 2021 · 1796d ago
🆔 60318471

"Those who know how to maintain silence know how to maintain everything." ~ N. Namdeo https://t.co/YM1tmM35dm

🖼️ 2 media attachments
NativeMag
@NativeMag
📅 Sep 02, 2021 · 1665d ago
🆔 97243392

🚨 NEW DIGITAL COVER ALERT 🚨 The NATIVE Presents: Sounds From 𝓣𝓱𝓲𝓼 Side featuring: Street Pop 3.0 🇳🇬 Amapiano 🇿🇦 Asakaa Drill 🇬🇭 FULL STORY: https://t.co/Ka8lhCfueu https://t.co/9ETnVgdVJd

🖼️ 9 media attachments
Nativetoday_
@Nativetoday_
📅 Mar 30, 2023 · 1091d ago
🆔 25913349

GREAT PHOTO OF >ADAM BEACH< HAVE A BLESSED WEEKEND BROTHER https://t.co/aSGWjCReBM

🖼️ 2 media attachments
HamelHusain
@HamelHusain
📅 Dec 20, 2025 · 95d ago
🆔 03048575

Found old @modal swag. Smells good! https://t.co/Ld3VV0RzRI

🖼️ 2 media attachments
github
@github
📅 Dec 20, 2025 · 95d ago
🆔 61716460

Looking for a festive Yule log to brighten up your terminal? You'll love @leereilly's GitHub CLI extension that gives you a cozy, animated Git log. 🔥 🪵 https://t.co/2oMEsTkEMP https://t.co/bxhmd254GE

🖼️ Media (+1 more)
AnthropicAI
@AnthropicAI
📅 Dec 20, 2025 · 95d ago
🆔 24619581

We're releasing Bloom, an open-source tool for generating behavioral misalignment evals for frontier AI models. Bloom lets researchers specify a behavior and then quantify its frequency and severity across automatically generated scenarios. Learn more: https://t.co/TwKstpLSy3

🖼️ 1 media attachment
Sanemavcil
@Sanemavcil
📅 Dec 20, 2025 · 95d ago
🆔 49610773

'Logan was terrified for Jake' - and honestly… you can feel the tension in his face. What do you think - genuine fear, or just a bad freeze-frame/angle? 🥊👀 @LoganPaul @jakepaul https://t.co/Hku9WLy1GC

🖼️ Media
joshuihuii
@joshuihuii
📅 Jan 03, 2024 · 812d ago
🆔 92129976

Nana conference Attacca conference https://t.co/pcGv8RkuQP

🖼️ 2 media attachments
dair_ai
@dair_ai
📅 Dec 20, 2025 · 95d ago
🆔 89649084

RAG systems struggle with multi-hop reasoning. In most cases, the problem isn't the LLMs. It's the retrieval system.

Standard RAG treats each piece of evidence as equally reliable, ignoring how documents connect to each other. Why is this a problem? When questions require reasoning across multiple sources, single-shot retrieval often misses "bridge" documents whose entities aren't mentioned in the original query. Iterative retrieval helps, but it introduces new issues: LLM-guided graph traversal can hallucinate or become stuck on partial reasoning from previous steps.

This new research introduces SA-RAG, a framework that applies spreading activation, a mechanism from cognitive psychology, to knowledge-graph-based retrieval.

How does it work? Instead of relying on the LLM to decide which documents to fetch next, activation propagates automatically through a knowledge graph. Starting from entities matched to the query, activation spreads outward through weighted connections, with strength diminishing over distance. Documents linked to highly activated entities get retrieved.

The system builds a hybrid structure during indexing. An LLM extracts entities and relationships from text chunks, creating a knowledge graph where documents connect to entities through "describes" links. At query time, seed entities are identified by embedding similarity, then activation flows through the graph in a breadth-first manner.

On MuSiQue, SA-RAG alone achieves 67% answer correctness with phi4, outperforming naive RAG at 45% and CoT-based iterative retrieval at 55%. When combined with chain-of-thought iterative retrieval, it reaches 74% on MuSiQue and 87% on 2WikiMultiHopQA. This system demonstrates a 25% to 39% absolute improvement over naive RAG across benchmarks. Notably, these results come from small, open-weight models like phi4 and gemma3, which require no fine-tuning.

Spreading activation captures associative relevance rather than surface-level similarity. The method works as a plug-and-play module, boosting any training-free RAG pipeline without architectural changes.

Paper: https://t.co/jLZLkacDAX

Learn to build effective RAG and AI agents in our academy: https://t.co/zQXQt0PMbG
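
The breadth-first spreading step is easy to picture in code. Below is a minimal sketch of spreading activation over a toy entity graph, assuming the setup described above (seed entities scored by their match to the query, activation decaying per hop through weighted edges). The constants and names (`DECAY`, `THRESHOLD`, `spread_activation`) are illustrative, not taken from the paper.

```python
# Minimal spreading-activation sketch over a toy knowledge graph.
from collections import defaultdict, deque

DECAY = 0.5        # attenuation per hop (assumed value)
THRESHOLD = 0.05   # stop propagating contributions below this (assumed value)

def spread_activation(graph, seeds):
    """graph: {entity: [(neighbor, edge_weight), ...]};
    seeds: {entity: initial activation from query matching}."""
    activation = defaultdict(float)
    queue = deque()
    for entity, a in seeds.items():
        activation[entity] = a
        queue.append(entity)
    visited = set(seeds)
    while queue:  # breadth-first propagation outward from the seeds
        node = queue.popleft()
        for neighbor, weight in graph.get(node, []):
            contrib = activation[node] * weight * DECAY
            if contrib < THRESHOLD:   # activation dies out with distance
                continue
            activation[neighbor] += contrib
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return dict(activation)

graph = {
    "marie_curie": [("radium", 0.9), ("sorbonne", 0.6)],
    "radium": [("nobel_prize_1911", 0.8)],
    "sorbonne": [("paris", 0.7)],
}
scores = spread_activation(graph, {"marie_curie": 1.0})
```

In SA-RAG proper, the documents linked to the most-activated entities are then retrieved, so a bridge document can surface even when its entities never appear in the query.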

🖼️ 2 media attachments
omarsar0
@omarsar0
📅 Dec 20, 2025 · 95d ago
🆔 56958572

Check out the other skill examples in the repo. https://t.co/Oj3Oh5kJ9v

🖼️ 1 media attachment
ivanleomk
@ivanleomk
📅 Dec 20, 2025 · 95d ago
🆔 26205307

@gr00vyfairy You can also use it to create what I think is the best OG image ever https://t.co/0D8Hr798NW

🖼️ 2 media attachments
readswithravi
@readswithravi
📅 Dec 19, 2025 · 96d ago
🆔 52215830

Action produces information. https://t.co/MMP7wfuWCw

🖼️ 1 media attachment
๐Ÿ”RealEricD retweeted
R
Reads with Ravi
@readswithravi
๐Ÿ“…
Dec 19, 2025
96d ago
๐Ÿ†”52215830

Action produces information. https://t.co/MMP7wfuWCw

Media 1Media 2
โค๏ธ6,813
likes
๐Ÿ”987
retweets
๐Ÿ–ผ๏ธ Media
omarsar0
@omarsar0
📅 Dec 20, 2025 · 95d ago
🆔 03929759

Skills is now officially supported in Codex. There is a neat built-in skill for planning. This is the best way to pull in the right context at the right time. Also, a great way to build highly specialized skills for your coding agents. https://t.co/fviSC12aci

🖼️ 1 media attachment
jxnlco
@jxnlco
📅 Dec 20, 2025 · 95d ago
🆔 76456841

Me and the homies. https://t.co/pU9i20ako1

🖼️ Media
vxylily
@vxylily
📅 Dec 19, 2025 · 96d ago
🆔 10225938

What are you buying? https://t.co/UsipE27Stv

🖼️ 1 media attachment
DataChaz
@DataChaz
📅 Dec 19, 2025 · 96d ago
🆔 30962734

This is wild. A real-time webcam demo using SmolVLM from @huggingface and llama.cpp! 🤯 Running fully local on a MacBook M3. https://t.co/BQ1HyP7RoC

🖼️ Media
rasbt
@rasbt
📅 Dec 20, 2025 · 95d ago
🆔 66188246

I really didn't expect another major open-weight LLM release this December, but here we go: NVIDIA released their new Nemotron 3 series this week. It comes in 3 sizes:

1. Nano (30B-A3B),
2. Super (100B),
3. and Ultra (500B).

Architecture-wise, the models use a Mixture-of-Experts (MoE) Mamba-Transformer hybrid architecture. As of this morning (Dec 19), only the Nano model has been released as an open-weight model, so this post will focus on that one (shown in my drawing below).

Nemotron 3 Nano (30B-A3B) is a 52-layer hybrid Mamba-Transformer model that interleaves Mamba-2 sequence-modeling blocks with sparse Mixture-of-Experts (MoE) feed-forward layers, and uses self-attention only in a small subset of layers.

There's a lot going on in the figure above, but in short, the architecture is organized into 13 macro blocks with repeated Mamba-2 → MoE sub-blocks, plus a few Grouped-Query Attention layers. In total, if we multiply the macro- and sub-blocks, there are 52 layers in this architecture.

Regarding the MoE modules, each MoE layer contains 128 experts but activates only 1 shared and 6 routed experts per token.

The Mamba-2 layers would take a whole article of their own to explain (perhaps a topic for another time). For now, you can think of them as conceptually similar to the Gated DeltaNet approach that Qwen3-Next and Kimi-Linear use, which I covered in my Beyond Standard LLMs article. The similarity between Gated DeltaNet and Mamba-2 layers is that both replace standard attention with a gated state-space update: the module maintains a running hidden state and mixes in new inputs via learned gates. In contrast to attention, it scales linearly instead of quadratically with the input sequence length.

What's actually quite exciting about this architecture is its really good performance compared to pure transformer architectures of similar size (like Qwen3-30B-A3B-Thinking-2507 and GPT-OSS-20B-A4B), while achieving much higher tokens-per-second throughput.

Overall, this is an interesting direction, even more extreme than Qwen3-Next and Kimi-Linear in its use of only a few attention layers. However, one of the strengths of the transformer architecture is its performance at (really) large scale, so I am curious to see how the larger Nemotron 3 Super and especially Ultra will compare to the likes of DeepSeek V3.2.
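
To make the "1 shared + 6 of 128 routed experts" sparsity concrete, here is a toy NumPy sketch of a single MoE layer's routing in the spirit of the description above. The top-k softmax router and all shapes are generic MoE conventions for illustration, not NVIDIA's actual implementation.

```python
# Toy sparse MoE layer: 128 routed experts, 6 active per token, 1 shared expert.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, D = 128, 6, 16              # 16 is a toy hidden dimension

router_w = rng.normal(size=(D, N_EXPERTS))          # router: hidden -> expert logits
experts = rng.normal(size=(N_EXPERTS, D, D)) * 0.1  # routed expert weights (toy)
shared = rng.normal(size=(D, D)) * 0.1              # shared expert, always active

def moe_layer(x):
    """Route one token through the sparse MoE feed-forward layer."""
    logits = x @ router_w
    top = np.argsort(logits)[-TOP_K:]          # pick the 6 highest-scoring experts
    g = np.exp(logits[top] - logits[top].max())
    gates = g / g.sum()                        # renormalized softmax over the top-6
    out = x @ shared                           # shared expert always contributes
    for gate, e in zip(gates, top):
        out = out + gate * (x @ experts[e])    # only 6 routed experts actually run
    return out, top

x = rng.normal(size=D)
y, chosen = moe_layer(x)
# Per token, only 7 of 129 expert MLPs execute; the other 122 routed experts are
# skipped entirely, which is how a ~30B-parameter model can activate only ~3B
# parameters (the "A3B" in the model name).
```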

@rasbt • Sat Dec 13 14:21

Just updated the Big LLM Architecture Comparison article... ...it grew quite a bit since the initial version in July 2025, more than doubled! https://t.co/oEt8XzNxik https://t.co/RZuwp6ZUaF

🖼️ 1 media attachment
adamwathan
@adamwathan
📅 Dec 18, 2025 · 97d ago
🆔 54058543

https://t.co/gBgot9C1ZV

🖼️ 1 media attachment
ivanleomk
@ivanleomk
📅 Dec 20, 2025 · 95d ago
🆔 90560126

https://t.co/1VkhHOw1X6

@adamwathan • Thu Dec 18 10:13

https://t.co/gBgot9C1ZV

🖼️ 1 media attachment
Native3rd
@Native3rd
📅 Nov 04, 2022 · 1237d ago
🆔 31204354

Believe so brightly that everyone sees the beauty in believing. ~ Native American 🪶✨ https://t.co/PLFFDE0g86

🖼️ 2 media attachments
Nativetoday_
@Nativetoday_
📅 Apr 02, 2023 · 1088d ago
🆔 49323521

Native Beauty 🌹❤️‍🔥❤️‍🔥🌹🌹 If you're a Native beauty fan of mine can I get a big….YESS !!! I love you All ❤️ https://t.co/gG4VDMMcWq

🖼️ 2 media attachments
Native3rd
@Native3rd
📅 Feb 24, 2024 · 760d ago
🆔 47919071

"We each want nothing more than to live for the moment! Nature hardwired us perpetually to follow the call of the wild, cull all the highs in life, and rejoice in life by dancing, singing, jumping, building nests, creating beauty, and playing with our young! We each find ourselves happiest when we are engaging in conduct that makes us feel Alive!" ~ K. J. Oldster

🖼️ 1 media attachment
ssandra23
@ssandra23
📅 Jul 04, 2021 · 1725d ago
🆔 19099906

Thank you for links @snkr_twitr union X JORDAN 📸 by me! 🤗💟💟💟 https://t.co/tV85BwEeta

🖼️ 2 media attachments
IndigenousBeads
@IndigenousBeads
📅 Sep 02, 2021 · 1665d ago
🆔 34542082

Hãu mitakonabi! 💕 Macaže ne Jordy Ironstar, mitaguyabi Céga K'inna eda hambi. Hello my friends! My name is @JordenIronstar & I am a Two Spirit bead artist from Carry the Kettle Nakoda Nation. ✨ I will be your host this week. So buckle up, it's going to be a bumpy ride 🚗 https://t.co/FnOMWq3M16

🖼️ 2 media attachments
Jtootoo22
@Jtootoo22
📅 Jun 21, 2022 · 1373d ago
🆔 47826688

Happy National Indigenous Peoples Day ! #Nunavut https://t.co/SiOtm6vhx9

🖼️ 2 media attachments
nareavera
@nareavera
📅 Nov 13, 2023 · 863d ago
🆔 18583792

want to give flowers but don't want to spend money - a jordan story https://t.co/As39P8ETTf

🖼️ 4 media attachments
nareavera
@nareavera
📅 Dec 31, 2023 · 815d ago
🆔 59671821

jordan does jastip (proxy shopping) again - a jisung three-tweet AU https://t.co/U0j3XbiRRe

🖼️ 8 media attachments
LaNativePatriot
@LaNativePatriot
📅 Aug 20, 2024 · 582d ago
🆔 40687084

@jordanbpeterson @petersonacademy I'll sign up if I get an in person interview with you I probably won't wear this to it…. Probably https://t.co/sFkVmdosEk

🖼️ 2 media attachments
Dostoevskyquot
@Dostoevskyquot
📅 Dec 19, 2025 · 96d ago
🆔 46323929

https://t.co/bM1AVlWGbk

🖼️ 1 media attachment