Your curated collection of saved posts and media

Showing 32 posts · last 7 days · newest first
@victormustar · Dec 16, 2025 · 104d ago · 🆔74827388

Y̶o̶u̶ ̶c̶a̶n̶ Claude can just do things. Connect it to HF ZeroGPU tools: Chatterbox Turbo, Z Image Turbo, or any MCP-compatible Spaces and watch it create autonomously :) https://t.co/medN75idAs

@victormustar • Wed Jun 18 10:36

Hugging Face PRO is the wildest $9/month deal in AI right now 🤯
🔹 25 min/day of H200 compute on Spaces ZeroGPU
🔹 ~1M free inference tokens from 15+ providers (Groq, Cerebras, etc.)
🔹 1TB private storage and more
🔹 More cool things... https://t.co/xGnZwLMEhH

🖼️ Media
@_akhaliq · Dec 16, 2025 · 104d ago · 🆔09345715

LongCat-Video-Avatar is out https://t.co/2kxi9UQ1Wm

🖼️ Media 1
@ivanleomk · Dec 16, 2025 · 104d ago · 🆔82277957

ahahahahha deploying on modal with manus ahahahhahaaahahahhahahahahahahaha https://t.co/Lx4h6evdn5

🖼️ Media 1
@XFreeze · Dec 16, 2025 · 104d ago · 🆔99708249

The Boring Company is quietly expanding the futuristic Vegas Loop underground.

Vegas Loop already feels like a cheat code: hop into a Tesla (or Cybertruck) underground and bypass the chaotic surface traffic. As of December 2025:
→ 8 stations live (LVCC campus + Resorts World, Westgate, Encore)
→ ~3.5 miles of operational tunnels connecting the Convention Center to nearby resorts
→ Over 3 million rides completed (boosted by huge events like SEMA & Cowboy Christmas)
→ Turns 30–45 min surface trips into quick 2–8 minute tunnel rides
→ Free inside LVCC, ~$5–10 for resort connections
→ Cybertrucks deployed on routes like LVCC–Encore
→ Full Self-Driving tests underway – driverless rides coming soon

And this is just the start... @boringcompany is approved for 68 miles of tunnels and 104 stations, connecting the airport (Phase 1 targeting Q1 2026!), downtown, Allegiant Stadium, UNLV and more – with 2–8 min rides and up to ~90,000 passengers/hour at full build-out via a dense autonomous EV network.

This isn't just congestion relief – it's the future of smarter, greener urban transport 🌍

🖼️ Media 1
@_NativeLife_ · Jan 21, 2016 · 3721d ago · 🆔15152384

This is what happens when natives get represented from an outside source. 🤔 https://t.co/otjWIdTYVe

🖼️ Media 1
@_NativeLife_ · May 06, 2016 · 3615d ago · 🆔30686208

Idc if he isn't native, he supports. & that's all that matters. https://t.co/RnuhLG0tMp

🖼️ Media 1
@_NativeLife_ · Oct 22, 2017 · 3081d ago · 🆔56616197

Little louder, for those in the back! 👌🏽 https://t.co/8qY3GRHAl6

🖼️ Media 1
@michaelduron · Jan 28, 2018 · 2983d ago · 🆔32953857

Day 17 & I had to save the best for last! That is, the best there is, best there was & the best there ever will be! @natbynature always one of my biggest supporters! Can't help but cheer our friend The Queen of Hearts, Cat lover & all around awesome Superstar! @WWE #wwe #wweart https://t.co/cxuFXkp3k9

🖼️ Media 1
@mjyharris · Apr 24, 2020 · 2166d ago · 🆔06398464

The same qualities that we look for in humans we also look for in horses....dependable, fearless, brave, honest, straightforward. Native River is all of these. https://t.co/Amfa78o2QG

🖼️ Media 1
@mjyharris · Jan 20, 2021 · 1895d ago · 🆔62599170

Native River is a relentless galloper. He always digs deep and you will never see him waving the white flag. Sometimes we describe these horses as 'warriors' because of their battling qualities, so it is nice to see him at rest, with the kindest of looks, and the kindest of eyes. https://t.co/V0Z6gZ1i1b

🖼️ Media 1
@mjyharris · Apr 13, 2022 · 1447d ago · 🆔80142092

NATIVE TRAIL "The eyes tell more than words could ever say." https://t.co/pLoTyju25P

🖼️ Media 1
@NI_News · Jan 03, 2023 · 1182d ago · 🆔72495362

Started any new tracks yet? #musicproducer 😎 📸 - michaelkleinberlin https://t.co/W1R35TkmUi

🖼️ Media 1
@ewoestmike · Jul 30, 2023 · 974d ago · 🆔41830657

🌟 NASP Agenda! 🌟 Explore our vision for robust Tribal Nations & honoring traditions. Thriving economy, energy independence, & heritage-rich education. Paving a brighter future. Stand with us: sovereignty & preservation! 🪶 https://t.co/18Jw6x26hg #NativeSovereignty https://t.co/YkDpkUa07C

🖼️ Media 1
@NativeSnap · Jun 29, 2024 · 639d ago · 🆔03900314

This is pretty much rainworld's story right? #rainworld #rainworldfanart https://t.co/LUZ7yAwhTX

🖼️ Media 1
@tsheko2020 · Jul 22, 2020 · 2077d ago · 🆔71392001

#gwedemantashe I speak life into every living person affected by this Covid-19 pandemic, regardless of position or circumstances. That person is a valuable member of his or her family, just like you are. https://t.co/UGC5Wg5DDq

🖼️ Media 1
@Native3rd · Mar 26, 2021 · 1831d ago · 🆔13661699

"The way we think about ourselves will give rise to the world we live in." - G Braden, Cherokee https://t.co/bC5mjm6R5h

🖼️ Media 1
@Native3rd · Jul 23, 2021 · 1711d ago · 🆔57080070

"Give me strength, not to be better than my enemies, but to defeat my greatest enemy, the doubts within myself. Give me strength for a straight back and clear eyes, so when life fades, as the setting sun, my spirit may come to you without shame" -Cherokee https://t.co/ZNFo689DEC

🖼️ Media 1
@nativedude8 · Oct 10, 2021 · 1632d ago · 🆔34642436

Anybody else enjoy soft native cock? 😍🤤 https://t.co/EPcM306JfG

🖼️ Media 1
@Nativecherokke · Jan 04, 2024 · 816d ago · 🆔88028480

Today is my birthday. I know I'm not perfect 🥲 https://t.co/vuFgbBsYZi

🖼️ Media 1
@tsheko2020 · Jan 08, 2024 · 812d ago · 🆔76401289

#UmkhokhaTheCurse Cannot pretend anymore, I miss old Nobuntu 😭😭😭😭 https://t.co/UCP32rY6h0

🖼️ Media 1
@Nativecherokke · Feb 03, 2024 · 786d ago · 🆔13527713

If you support Native American culture Say......❝Yes❞ https://t.co/lxPlpWghR5

🖼️ Media 1
@nativelimpact · Feb 25, 2024 · 764d ago · 🆔73220561

beat the getting unblockabled allegations https://t.co/ArhFsx3aHf

🖼️ Media
@omarsar0 · Dec 16, 2025 · 104d ago · 🆔49273303

New benchmark from Google Research. Models get better at benchmarks, but do they actually get more factual?

Previous evaluations focused on narrow slices: grounding to documents, answering from memory, or using search. A model excelling at one often fails at another. This new research introduces the FACTS Leaderboard, a comprehensive suite that measures factuality across four distinct dimensions:

- FACTS Multimodal tests visual grounding combined with world knowledge on ~1,500 image-based questions.
- FACTS Parametric assesses closed-book factoid recall using 2,104 adversarially-sampled questions that stumped open-weight models.
- FACTS Search evaluates information-seeking with web tools across 1,884 queries, including multi-hop reasoning.
- FACTS Grounding v2 tests whether long-form responses stay faithful to provided documents.

The aggregate FACTS Score averages performance across all four.

Results: Gemini 3 Pro leads with 68.8% overall. Gemini 2.5 Pro follows at 62.1%, then GPT-5 at 61.8%. But the sub-scores tell a different story. Claude models are precision-oriented, achieving high no-contradiction rates but hedging frequently on parametric questions; Claude 4 Sonnet doesn't attempt 45.1% of parametric queries. GPT models show higher coverage but more contradictions.

On multimodal, even the best models only reach ~47% accuracy when requiring both complete coverage and zero contradictions. On parametric knowledge, the spread is enormous: Gemini 3 Pro hits 76.4% while GPT-5 mini manages just 16.0%.

The benchmark maintains both public and private splits to prevent overfitting. All evaluation runs through Kaggle with standardized search tools for fair comparison.

A single factuality number hides crucial behavioral differences. Some models guess aggressively, others hedge conservatively. This suite exposes those tradeoffs across the contexts where factuality actually matters.

Paper: https://t.co/TCHOSGlQKs
Learn how to evaluate and build effective AI agents in our academy: https://t.co/JBU5beIoD0

🖼️ Media 1
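The aggregate FACTS Score described above is just the mean of the four per-dimension scores. A minimal sketch of that arithmetic (the function name and the sample sub-scores are hypothetical, not leaderboard values):

```python
def facts_score(multimodal: float, parametric: float, search: float, grounding: float) -> float:
    """Aggregate FACTS Score: unweighted mean of the four dimension scores."""
    return (multimodal + parametric + search + grounding) / 4

# Hypothetical per-dimension scores (percentages) for an imaginary model:
overall = facts_score(47.0, 76.4, 70.0, 82.0)
print(f"aggregate FACTS Score: {overall:.2f}")
```

As the post notes, the single aggregate hides the precision/coverage tradeoff: two models with the same mean can differ sharply in how often they decline to answer.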
@anirudhg9119 · Dec 15, 2025 · 105d ago · 🆔85089155

What is the best possible task accuracy after fixing constraints on the inference process: tokens across all generations, depth of the generation chain ("latency"), total context length, and compute? PDR achieves the Pareto frontier: "draft in parallel → distill to a compact workspace → refine". https://t.co/jWoGu8EVdj

@dair_ai • Mon Dec 15 14:59

NEW Research from Meta Superintelligence Labs and collaborators. The default approach to improving LLM reasoning today remains extending chain-of-thought sequences. Longer reasoning traces aren't always better. Longer traces conflate reasoning depth with sequence length and inh

🖼️ Media 1
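The "draft in parallel → distill to a compact workspace → refine" loop the post describes can be sketched generically. Everything here (the `call_llm` stub, the prompt wording, the draft and round counts) is an illustrative assumption, not Meta's implementation:

```python
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned string for illustration.
    return f"response to: {prompt[:40]}"


def pdr(task: str, n_drafts: int = 4, n_rounds: int = 2) -> str:
    """Parallel-Draft-Refine sketch: independent drafts are distilled into a
    compact workspace, then the answer is refined against that workspace,
    bounding context length regardless of how many drafts were generated."""
    # 1. Draft in parallel: n_drafts independent attempts at the task.
    with ThreadPoolExecutor(max_workers=n_drafts) as pool:
        drafts = list(pool.map(call_llm,
                               [f"Draft a solution to: {task}" for _ in range(n_drafts)]))
    # 2. Distill: compress all drafts into one compact workspace.
    workspace = call_llm("Distill these drafts into a compact workspace:\n"
                         + "\n---\n".join(drafts))
    # 3. Refine: iterate against the workspace, not the full draft history.
    answer = workspace
    for _ in range(n_rounds):
        answer = call_llm(f"Refine using workspace:\n{workspace}\nCurrent answer:\n{answer}")
    return answer
```

The point of the distill step is that later refinement rounds condition on a fixed-size workspace, which is how the method trades parallel breadth against the latency/context constraints mentioned in the post.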
@dair_ai · Dec 16, 2025 · 104d ago · 🆔39487018

NEW research on abstract reasoning. Frontier models like GPT-5 and Grok 4 still can't do what humans find trivially easy: infer transformation rules from a handful of examples.

The default approach to solving ARC-AGI (the leading benchmark for abstract reasoning) treats these visual puzzles as pure text: nested lists like [[0,1,2],[3,4,5]]. But that contradicts how humans actually solve these puzzles.

This new research introduces Vision-Language Synergy Reasoning (VLSR), a framework that strategically combines visual and textual modalities for different reasoning stages. Vision and text have complementary strengths. Vision excels at global pattern recognition, providing a 3.0% improvement in rule summarization through holistic 2D perception. Text excels at precise execution, with vision causing a 20.5% performance drop on element-wise manipulation tasks.

VLSR decomposes the problem accordingly. Phase 1: visualize example matrices as color-coded grids for rule summarization. Phase 2: switch to text for precise rule application. This is about matching the modality to the task.

They also introduce Modality-Switch Self-Correction (MSSC), which breaks the confirmation bias that plagues text-only self-correction: after generating an answer textually, the system verifies it visually.

Results across GPT-4o, Gemini-2.5-Pro, o4-mini, and Qwen3-VL: up to 7.25% improvement on Gemini and 4.5% on o4-mini over text-only baselines. Text-only self-correction often degrades performance across rounds; MSSC improves consistently at each iteration.

The approach extends to fine-tuning. Vision-language synergy training achieves 13.25% on ARC-AGI with Qwen3-8B, outperforming text-only fine-tuning (9.75%) and the closed-source baseline GPT-4o (8.25%) with a much smaller model.

Abstract reasoning may require coordinated visual and linguistic processing, not either modality alone. This work shows that matching the modality to the reasoning stage, rather than forcing everything through text, unlocks consistent gains across models.

Paper: https://t.co/cQZDUGCmjz
Learn to build effective AI agents in our academy: https://t.co/zQXQt0PMbG

🖼️ Media 1, Media 2
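The two-phase split above can be sketched on a toy grid: phase 1 inspects a rendered view of the nested-list representation, phase 2 executes a rule element-wise on the raw text form. `render_grid`, the transpose "rule", and all names here are illustrative stand-ins, not the paper's code:

```python
# ARC-style grids are nested integer lists, e.g. [[0, 1, 2], [3, 4, 5]].

def render_grid(grid: list[list[int]]) -> str:
    """Phase 1 stand-in: render the grid as aligned rows for holistic
    inspection (a crude text proxy for the color-coded image view)."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)


def apply_rule_textually(grid: list[list[int]]) -> list[list[int]]:
    """Phase 2: precise element-wise execution on the text representation.
    The inferred 'rule' here is a transpose, chosen purely for illustration."""
    return [list(col) for col in zip(*grid)]


example = [[0, 1, 2], [3, 4, 5]]
print(render_grid(example))           # global view for rule summarization
print(apply_rule_textually(example))  # [[0, 3], [1, 4], [2, 5]]
```

MSSC would then re-render `apply_rule_textually`'s output and check it against the examples visually, rather than re-reading the same nested lists that produced the answer.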
@travelingflying · Dec 15, 2025 · 105d ago · 🆔23821884

The media is extremely racist against White people. Now, replace the word "White" with "Black" and imagine the outrage. This is not okay. Anti-White racism has to stop. https://t.co/E3NWW5KBAl

🖼️ Media
@gerardsans · Dec 16, 2025 · 104d ago · 🆔63951984

Unpopular opinion: gradient descent and sampling won't reach AGI. Meta's PDR paper hides quiet admissions behind shiny evals: the workspace is not persistent, long-context failure modes, "manual" steering. Why is the transformer failing at every turn? Read: https://t.co/T4ib3pgCVg https://t.co/WlRx4Lo5D5

🖼️ Media 1
@SwayStar123 · Dec 16, 2025 · 104d ago · 🆔09320918

Speedrunning ImageNet Diffusion: 360x faster training. Many new techniques have demonstrated convergence speedups over DiT in the past few years; however, all of them have been studied in isolation, against increasingly outdated baselines. I present SR-DiT (SpeedrunDiT), which combines some of the best techniques into one new modern baseline.

🖼️ Media 1
@RachelTrue · Jan 27, 2019 · 2620d ago · 🆔98286081

This is not about self-absorption... it's about racism, parity & 💰. Speaking up is costing me $ but may help younger folk coming up down the line achieve those things. https://t.co/oBVQkGslpZ

🖼️ Media 1
@NPR · Feb 08, 2019 · 2607d ago · 🆔82426624

Are you a person of color who's encountered racist caricatures while you were in school, either recently or in the past? @NPRWeekend wants to hear from you. https://t.co/QIxQmqnSG6

🖼️ Media 1
@rockstxrmingi · Oct 25, 2020 · 1982d ago · 🆔34866433

reject embrace modernity tradition https://t.co/FSxD18Qd1r

🖼️ Media 1, Media 2
@rafagrassetti · May 18, 2021 · 1777d ago · 🆔98731010

#NativelyDigital Original: Copy: https://t.co/dTYgIuSSsQ

🖼️ Media 1, Media 2