Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
Native3rd
@Native3rd
📅 Mar 26, 2021 · 1832d ago
🆔 13661699

"The way we think about ourselves will give rise to the world we live in." ~ G Braden, Cherokee https://t.co/bC5mjm6R5h

Media 1
🖼️ Media
Native3rd
@Native3rd
📅 Jul 23, 2021 · 1712d ago
🆔 57080070

"Give me strength, not to be better than my enemies, but to defeat my greatest enemy, the doubts within myself. Give me strength for a straight back and clear eyes, so when life fades, as the setting sun, my spirit may come to you without shame" -Cherokee https://t.co/ZNFo689DEC

Media 1
🖼️ Media
nativedude8
@nativedude8
📅 Oct 10, 2021 · 1633d ago
🆔 34642436

Anybody else enjoy soft native cock? 😍🤤 https://t.co/EPcM306JfG

Media 1
🖼️ Media
Nativecherokke
@Nativecherokke
📅 Jan 04, 2024 · 817d ago
🆔 88028480

Today is my birthday. I know I'm not perfect 🥲 https://t.co/vuFgbBsYZi

Media 1
🖼️ Media
tsheko2020
@tsheko2020
📅 Jan 08, 2024 · 813d ago
🆔 76401289

#UmkhokhaTheCurse Cannot pretend anymore, I miss old Nobuntu 😭😭😭😭 https://t.co/UCP32rY6h0

Media 1
🖼️ Media
Nativecherokke
@Nativecherokke
📅 Feb 03, 2024 · 787d ago
🆔 13527713

If you support Native American culture, say... "Yes" https://t.co/lxPlpWghR5

Media 1
🖼️ Media
nativelimpact
@nativelimpact
📅 Feb 25, 2024 · 765d ago
🆔 73220561

beat the getting unblockabled allegations https://t.co/ArhFsx3aHf

🖼️ Media
omarsar0
@omarsar0
📅 Dec 16, 2025 · 105d ago
🆔 49273303

New benchmark from Google Research. Models get better at benchmarks, but do they actually get more factual? Previous evaluations focused on narrow slices: grounding to documents, answering from memory, or using search. A model excelling at one often fails at another.

This new research introduces the FACTS Leaderboard, a comprehensive suite that measures factuality across four distinct dimensions:
- FACTS Multimodal tests visual grounding combined with world knowledge on ~1,500 image-based questions.
- FACTS Parametric assesses closed-book factoid recall using 2,104 adversarially-sampled questions that stumped open-weight models.
- FACTS Search evaluates information-seeking with web tools across 1,884 queries, including multi-hop reasoning.
- FACTS Grounding v2 tests whether long-form responses stay faithful to provided documents.

The aggregate FACTS Score averages performance across all four.

Results: Gemini 3 Pro leads with 68.8% overall. Gemini 2.5 Pro follows at 62.1%, then GPT-5 at 61.8%. But the sub-scores tell a different story. Claude models are precision-oriented, achieving high no-contradiction rates but hedging frequently on parametric questions; Claude 4 Sonnet doesn't attempt 45.1% of parametric queries. GPT models show higher coverage but more contradictions. On multimodal, even the best models only reach ~47% accuracy when requiring both complete coverage and zero contradictions. On parametric knowledge, the spread is enormous: Gemini 3 Pro hits 76.4% while GPT-5 mini manages just 16.0%.

The benchmark maintains both public and private splits to prevent overfitting. All evaluation runs through Kaggle with standardized search tools for fair comparison. A single factuality number hides crucial behavioral differences: some models guess aggressively, others hedge conservatively. This suite exposes those tradeoffs across the contexts where factuality actually matters.

Paper: https://t.co/TCHOSGlQKs
Learn how to evaluate and build effective AI agents in our academy: https://t.co/JBU5beIoD0
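Editor's sketch of the aggregation the post describes: the overall FACTS Score is said to be a plain average of the four sub-suite scores. The sub-scores below are illustrative stand-ins (only the 76.4 parametric figure is quoted in the post), not the leaderboard's actual numbers.

```python
def facts_score(sub_scores: dict[str, float]) -> float:
    """Average the four dimension scores into one aggregate, per the post."""
    dims = ["multimodal", "parametric", "search", "grounding"]
    return sum(sub_scores[d] for d in dims) / len(dims)

# Illustrative sub-scores; "parametric" echoes the Gemini 3 Pro figure
# quoted in the post, the rest are made up for the example.
example = {
    "multimodal": 47.0,
    "parametric": 76.4,
    "search": 70.0,
    "grounding": 81.8,
}
print(round(facts_score(example), 2))  # prints 68.8
```

The point of averaging equally is that no single dimension can dominate, which is also why the post stresses reading the sub-scores, not just the aggregate.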

Media 1
🖼️ Media
anirudhg9119
@anirudhg9119
📅 Dec 15, 2025 · 106d ago
🆔 85089155

Best possible task accuracy after fixing constraints on the inference process: tokens across all generations, depth of the generation chain ("latency"), total context length, and compute. PDR achieves the Pareto frontier: "draft in parallel → distill to a compact workspace → refine". https://t.co/jWoGu8EVdj
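Editor's sketch of the draft → distill → refine control loop named in the post. Everything here is a hypothetical stand-in: `generate` is a stub for any LLM call, and the structure only illustrates why parallel drafts bound chain depth while distillation bounds context length.

```python
from concurrent.futures import ThreadPoolExecutor

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; deterministic stub for illustration only.
    return f"answer({prompt!r})"

def pdr(task: str, n_drafts: int = 4, rounds: int = 2) -> str:
    """Parallel Draft -> Distill -> Refine, as sketched in the post."""
    workspace = ""
    for _ in range(rounds):
        # 1) Draft in parallel: independent drafts, so chain depth
        #    ("latency") stays constant regardless of n_drafts.
        prompts = [f"{task} | workspace: {workspace} | draft {i}"
                   for i in range(n_drafts)]
        with ThreadPoolExecutor(max_workers=n_drafts) as pool:
            drafts = list(pool.map(generate, prompts))
        # 2) Distill: compress all drafts into one compact workspace,
        #    keeping total context bounded across rounds.
        workspace = generate("distill: " + " || ".join(drafts))
    # 3) Refine: final answer conditioned only on the compact workspace.
    return generate(f"refine: {task} given {workspace}")
```

With a real model behind `generate`, the knobs `n_drafts` and `rounds` trade total token spend against depth, which is the Pareto-frontier framing the post refers to.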

@dair_ai • Mon Dec 15 14:59

NEW Research from Meta Superintelligence Labs and collaborators. The default approach to improving LLM reasoning today remains extending chain-of-thought sequences. Longer reasoning traces aren't always better. Longer traces conflate reasoning depth with sequence length and inh

Media 1
🖼️ Media
dair_ai
@dair_ai
📅 Dec 16, 2025 · 105d ago
🆔 39487018

NEW research on abstract reasoning. Frontier models like GPT-5 and Grok 4 still can't do what humans find trivially easy: infer transformation rules from a handful of examples.

The default approach to solving ARC-AGI (the leading benchmark for abstract reasoning) treats these visual puzzles as pure text: nested lists like [[0,1,2],[3,4,5]]. But that contradicts how humans actually solve these puzzles.

This new research introduces Vision-Language Synergy Reasoning (VLSR), a framework that strategically combines visual and textual modalities for different reasoning stages. Vision and text have complementary strengths: vision excels at global pattern recognition, providing a 3.0% improvement in rule summarization through holistic 2D perception, while text excels at precise execution, with vision causing a 20.5% performance drop on element-wise manipulation tasks.

VLSR decomposes the problem accordingly. Phase 1: visualize example matrices as color-coded grids for rule summarization. Phase 2: switch to text for precise rule application. This is about matching the modality to the task.

They also introduce Modality-Switch Self-Correction (MSSC), which breaks the confirmation bias that plagues text-only self-correction: after generating an answer textually, the system verifies it visually.

Results across GPT-4o, Gemini-2.5-Pro, o4-mini, and Qwen3-VL: up to 7.25% improvement on Gemini and 4.5% on o4-mini over text-only baselines. Text-only self-correction often degrades performance across rounds; MSSC improves consistently at each iteration.

The approach extends to fine-tuning. Vision-language synergy training achieves 13.25% on ARC-AGI with Qwen3-8B, outperforming text-only fine-tuning (9.75%) and the closed-source baseline GPT-4o (8.25%) with a much smaller model. Abstract reasoning may require coordinated visual and linguistic processing, not either modality alone.

This work shows that matching the modality to the reasoning stage, rather than forcing everything through text, unlocks consistent gains across models.
Paper: https://t.co/cQZDUGCmjz
Learn to build effective AI agents in our academy: https://t.co/zQXQt0PMbG
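Editor's sketch of the two-phase control flow the post describes (visual rule summarization, textual rule application, then a cross-modality check in the spirit of MSSC). Every helper here is a hypothetical stub for a model call; only the loop structure is taken from the post.

```python
# Stubs standing in for model calls; real VLSR would render grids as
# images for phase 1 and use a VLM, then a text model for phase 2.

def summarize_rule(examples):
    # Phase 1 (vision): infer the rule from a holistic view of examples.
    return f"rule inferred from {len(examples)} example pairs"

def apply_rule(rule, test_grid):
    # Phase 2 (text): precise element-wise application of the rule.
    return {"rule": rule, "output": test_grid}

def looks_consistent(answer, examples):
    # MSSC-style check: verify the textual answer in the visual modality,
    # breaking text-only confirmation bias. Stubbed as a sanity check.
    return answer["output"] is not None

def vlsr(examples, test_grid, max_rounds=3):
    rule = summarize_rule(examples)
    answer = apply_rule(rule, test_grid)
    for _ in range(max_rounds):
        if looks_consistent(answer, examples):
            break
        # On visual inconsistency: re-summarize and re-apply.
        rule = summarize_rule(examples)
        answer = apply_rule(rule, test_grid)
    return answer
```

The design point the post emphasizes is that the verifier runs in a different modality from the generator, so errors invisible in text can still be caught.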

Media 1 · Media 2
🖼️ Media
travelingflying
@travelingflying
📅 Dec 15, 2025 · 106d ago
🆔 23821884

The media is extremely racist against White people. Now, replace the word "White" with "Black" and imagine the outrage. This is not okay. Anti-White racism has to stop. https://t.co/E3NWW5KBAl

🖼️ Media
gerardsans
@gerardsans
📅 Dec 16, 2025 · 105d ago
🆔 63951984

Unpopular opinion: gradient descent and sampling won't reach AGI. Meta's PDR paper makes quiet admissions behind shiny evals: the workspace is not persistent, long-context failure modes, "manual" steering. Why is the transformer failing at every turn? Read: https://t.co/T4ib3pgCVg https://t.co/WlRx4Lo5D5

Media 1
🖼️ Media
SwayStar123
@SwayStar123
📅 Dec 16, 2025 · 105d ago
🆔 09320918

Speedrunning ImageNet Diffusion - 360x faster training. There have been many new techniques demonstrating convergence speedups over DiT in the past few years; however, all of them have been studied in isolation, against increasingly outdated baselines. I present SR-DiT (SpeedrunDiT), which combines some of the best techniques into one new modern baseline.

Media 1
🖼️ Media
RachelTrue
@RachelTrue
📅 Jan 27, 2019 · 2621d ago
🆔 98286081

This is not about self-absorption... it's about racism, parity & 💰. Speaking up is costing me $ but may help younger folk coming up down the line achieve those things. https://t.co/oBVQkGslpZ

Media 1
🖼️ Media
NPR
@NPR
📅 Feb 08, 2019 · 2608d ago
🆔 82426624

Are you a person of color who's encountered racist caricatures while you were in school – either recently or in the past? @NPRWeekend wants to hear from you. https://t.co/QIxQmqnSG6

Media 1
🖼️ Media
rockstxrmingi
@rockstxrmingi
📅 Oct 25, 2020 · 1983d ago
🆔 34866433

reject embrace modernity tradition https://t.co/FSxD18Qd1r

Media 1 · Media 2
🖼️ Media
rafagrassetti
@rafagrassetti
📅 May 18, 2021 · 1778d ago
🆔 98731010

#NativelyDigital Original: Copy: https://t.co/dTYgIuSSsQ

Media 1 · Media 2
🖼️ Media
Native3rd
@Native3rd
📅 Feb 05, 2023 · 1150d ago
🆔 40838150

"If you are driven by fear, anger or pride… Nature will force you to compete. If you are guided by courage, awareness, love, tranquility and peace… Nature will serve you." ~ A. Ray, 🪶✨ #INDIGENOUS #NativeTwitter https://t.co/o1X0AjAxN8

Media 1
🖼️ Media
saviorstefani
@saviorstefani
📅 Sep 30, 2023 · 913d ago
🆔 14559262

rachel: "he twisted what I said to use it against me out there, as if I were a prejudiced person, against minorities" KEEP TALKING, RACHEL https://t.co/bChtAO2SrY

🖼️ Media
native_fi
@native_fi
📅 Feb 07, 2024 · 783d ago
🆔 74032332

🔱 Native x @MantaNetwork: Scaling Web3 with ZK Tech! 🌊 We're elated to announce our latest integration w/ Manta, enhancing interoperability through ZK technology! Combining Native's Unified Liquidity & Manta's Modular design, together we're forging a unified crypto ecosystem. https://t.co/wYKvhCIWrx

Media 1
🖼️ Media
MantaNetwork
@MantaNetwork
📅 Feb 07, 2024 · 783d ago
🆔 42954763

🌊 @native_fi, Web3's Liquidity Layer, now joins the #MantaPacific Ecosystem! Native is a liquidity solution that combines bridges, assets and pricing into one offering. 🔗 https://t.co/zpVlhRqHup

@native_fi • Wed Feb 07 12:00

🔱 Native x @MantaNetwork: Scaling Web3 with ZK Tech! 🌊 We're elated to announce our latest integration w/ Manta, enhancing interoperability through ZK technology! Combining Native's Unified Liquidity & Manta's Modular design, together we're forging a unified crypto ecosyste

Media 1
🖼️ Media
SpirosMargaris
@SpirosMargaris
📅 Dec 16, 2025 · 105d ago
🆔 78490813

From One-Person Companies to Generative Media, AI Funding Spans the Stack https://t.co/VcljFNMZPV @pymnts

Media 1
🖼️ Media
gr00vyfairy
@gr00vyfairy
📅 Dec 16, 2025 · 105d ago
🆔 29125593

I built a reading tracker with Manus 1.6 Max. https://t.co/ZyXiLQIbFE


🖼️ Media
ayami_marketing
@ayami_marketing
📅 Dec 16, 2025 · 105d ago
🆔 63644674

ใ‚„ใฐใ„ใ€‚CanvaใŒ่ฆใ‚‰ใชใใชใ‚‹ใ‹ใ‚‚... Manus1.6ใงใ€Nano banana proใฎ็”ปๅƒใŒใƒ†ใ‚ญใ‚นใƒˆ็ทจ้›†ใ‹ใ‚‰่ƒŒๆ™ฏๅ‰Š้™คใพใงๅฎŒ็ตใ™ใ‚‹ใ‚ˆใ†ใซใชใฃใŸใ€‚ https://t.co/hp2RpIJuRd

🖼️ Media
๐Ÿ”ivanleomk retweeted
A
ใ‚ใ‚„ใฟ๏ฝœใƒžใƒผใ‚ฑใƒ†ใ‚ฃใƒณใ‚ฐ
@ayami_marketing
๐Ÿ“…
Dec 16, 2025
105d ago
๐Ÿ†”63644674

ใ‚„ใฐใ„ใ€‚CanvaใŒ่ฆใ‚‰ใชใใชใ‚‹ใ‹ใ‚‚... Manus1.6ใงใ€Nano banana proใฎ็”ปๅƒใŒใƒ†ใ‚ญใ‚นใƒˆ็ทจ้›†ใ‹ใ‚‰่ƒŒๆ™ฏๅ‰Š้™คใพใงๅฎŒ็ตใ™ใ‚‹ใ‚ˆใ†ใซใชใฃใŸใ€‚ https://t.co/hp2RpIJuRd

โค๏ธ164
likes
๐Ÿ”7
retweets
๐Ÿ–ผ๏ธ Media
mozarros1
@mozarros1
📅 Dec 16, 2025 · 105d ago
🆔 90972210

https://t.co/dapoCilroN

@youwouldntpost • Tue Dec 16 00:43

fascinated by the trajectory this book took from "forgotten on arrival" to "dustbin of literature" to "rediscovered quiet masterpiece" to "the NYRB edition absolutely everyone has been recommended at some point" to "enough already" to "overrated trash"

Media 1
🖼️ Media
๐Ÿ”youwouldntpost retweeted
M
๐Ÿ‰๐Ÿƒ๐Ÿ‚,,
@mozarros1
๐Ÿ“…
Dec 16, 2025
105d ago
๐Ÿ†”90972210

https://t.co/dapoCilroN

Media 1
โค๏ธ2
likes
๐Ÿ”1
retweets
๐Ÿ–ผ๏ธ Media
LVTHalo
@LVTHalo
📅 Jun 02, 2023 · 1033d ago
🆔 66787614

The @NativeGaming team share their emotions after the reverse sweep! https://t.co/p0GKVWjSPB

🖼️ Media
redstone_defi
@redstone_defi
📅 Apr 18, 2024 · 712d ago
🆔 63552168

RedStone 🤝 @native_fi RedStone is delighted to provide price feeds to @native_fi in our Core (Pull) Model. Now LST & LRT holders can easily contribute to Native's programmable liquidity Aqua, receiving yield in return. Details below 👇 https://t.co/Eceo0f1sS5

Media 1
🖼️ Media
UnslothAI
@UnslothAI
📅 Dec 15, 2025 · 106d ago
🆔 07452746

NVIDIA releases Nemotron 3 Nano, a new 30B hybrid reasoning model! 🔥 Nemotron 3 has a 1M context window and best-in-class performance for SWE-Bench, reasoning and chat. Run the MoE model locally with 24GB RAM. Guide: https://t.co/UAHCV8dMNC GGUF: https://t.co/XdmG9ZSnNQ https://t.co/XttVvteTqE

Media 1 · Media 2
🖼️ Media
AiBreakfast
@AiBreakfast
📅 Dec 15, 2025 · 106d ago
🆔 21572196

ElevenLabs has officially LOST to open source. ResembleAI lets you clone ANY voice without verification using only 5-10 seconds of audio, and dominates on paralinguistic tags for human-like expressions.

Most "fast" text-to-speech models sound robotic. Most "quality" TTS models are slow. None incorporate authentication at a foundational level. @resembleai solved all three.

Chatterbox Turbo delivers:
🟢 <150ms time-to-first-sound
🟢 State-of-the-art quality that beats larger proprietary models
🟢 Natural, programmable expressions
🟢 Zero-shot voice cloning with just 5 seconds of audio
🟢 PerTh watermarking for authenticated and verifiable audio
🟢 Open source – full transparency, no black boxes

Try it on HuggingFace: https://t.co/cPXPQyPrRN

Media 2
🖼️ Media
vllm_project
@vllm_project
📅 Dec 15, 2025 · 106d ago
🆔 42502335

Multimodal serving pain: vision encoder work can stall text prefill/decode and make tail latency jittery.

We built Encoder Disaggregation (EPD) in vLLM: run the encoder as a separate scalable service, pipeline it with prefill/decode, and reuse image embeddings via caching. This provides an efficient and flexible pattern for multimodal serving.

Results: consistently higher throughput (5–20% across stable regions) and significant reductions in P99 TTFT and P99 TPOT.

Read more: https://t.co/kGjOCuPZy2 #vLLM #LLMInference #Multimodal

Media 1 · Media 2
+1 more
🖼️ Media