Your curated collection of saved posts and media

Showing 30 posts · last 7 days · newest first
AnthropicAI (@AnthropicAI) · 📅 Oct 05, 2023 · 🆔 26653624

Most neurons in language models are "polysemantic" – they respond to multiple unrelated things. For example, one neuron in a small language model activates strongly on academic citations, English dialogue, HTTP requests, Korean text, and others. https://t.co/PrqtDGar0J

🖼️ 2 media attachments

AnthropicAI (@AnthropicAI) · 📅 Oct 05, 2023 · 🆔 20030382

We also systematically show that the features we find are more interpretable than the neurons, using both a blinded human evaluator and a large language model (autointerpretability). 📄 https://t.co/XQvzENHMrp https://t.co/dawkxhAvix

🖼️ 2 media attachments

OpenAI (@OpenAI) · 📅 Feb 15, 2024 · 🆔 86342435

Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions. https://t.co/YYpOAcrXQ3 Prompt: "Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes."

🖼️ media attached

_NativeLife_ (@_NativeLife_) · 📅 Jan 14, 2016 · 🆔 16900609

😳😳😳 I'm just gonna leave this right here. https://t.co/RvSAsHeXxM

🖼️ 2 media attachments

NativeOrganizer (@NativeOrganizer) · 📅 Nov 10, 2020 · 🆔 97038092

We want to celebrate our heroes on the front lines. Thank you to our frontline workers who worked with the public to get people registered and helped with early voting and polling. Your giving and caring spirit is what Indian Country is all about. #NativesVote https://t.co/GAh3UkdWnA

🖼️ 4 media attachments

native_info (@native_info) · 📅 May 28, 2022 · 🆔 83153665

[Now taking orders, to great acclaim] Illustrations by Poyoyon♥Rock (ぽよよん♥ろっく): "Sukumizu Midori (好久水 みどり)" and "Sukumizu Pinku (好久水 ぴんく)" https://t.co/ot6bEDoDmr #ネイティブ #エロホビ https://t.co/FadhcrFoLc

🖼️ 6 media attachments

native_info (@native_info) · 📅 May 28, 2022 · 🆔 91931648

[First public reveal] Illustration by Sato Kuuki (左藤空気): "Asaba Ibuki (浅葉 依吹)". Project in progress! #クレイラドール #エロホビ https://t.co/9uorQyRbsn

🖼️ 2 media attachments

OperatorUplift (@OperatorUplift) · 📅 Dec 16, 2025 · 🆔 91278602

gud tek: https://t.co/wOWdglFmPx https://t.co/2mrY6uQrUg

🖼️ 2 media attachments

tbpn (@tbpn) · 📅 Dec 19, 2025 · 🆔 46817416

BREAKING: Cursor is acquiring Graphite. Cursor CEO @mntruell & Graphite CEO @MerrillLutsky will be live on TBPN at 11:30a PT to break it all down. https://t.co/ASyIHfha9T

🖼️ 1 media attachment

TodePond (@TodePond) · 📅 Dec 19, 2025 · 🆔 89074685

I'm leaving @tldraw to enter the world of contracting. From January, I'll be prototyping contributor tools at @wikipedia. My next availability is June! https://t.co/TVbpYLQ1E1

🖼️ media attached

misterminsoo (@misterminsoo) · 📅 Dec 19, 2025 · 🆔 84936112

Thinking about how the SAT reading section now has micropassages that can be as short as 25 words. Absolutely howling at this question from an official College Board practice exam. Bro 😭 https://t.co/lllYsGEliV

🖼️ 1 media attachment

Django Gold (@django) · 📅 Dec 19, 2025 · 🆔 05329237
🔁 Retweeted by youwouldntpost · ❤️ 3,026 likes · 🔁 200 retweets

I need to apologize https://t.co/9rVT2W8QBv

🖼️ media attached

youwouldntpost (@youwouldntpost) · 📅 Dec 19, 2025 · 🆔 14551988

probably the only thing better than the Tesla Diner is paying a $75 cover charge to get in https://t.co/ocSRqTcU0t

🖼️ 4 media attachments

G.W. Bush-era Leftism (@DubyaEraLeft) · 📅 Dec 19, 2025 · 🆔 92521165
🔁 Retweeted by youwouldntpost · ❤️ 1,695 likes · 🔁 132 retweets

https://t.co/jArr1ogRYK

🖼️ 2 media attachments

github (@github) · 📅 Dec 19, 2025 · 🆔 59188510

MCP servers were stuck on text and data. Not anymore. 👀 Proposed by Anthropic, OpenAI, and the MCP-UI community, the new MCP Apps Extension standardizes interactive interfaces with security built in. Here's what you need to know. ▶️ https://t.co/G4QTiv2c45

🖼️ media attached

TheRabbitHole (@TheRabbitHole) · 📅 Dec 19, 2025 · 🆔 32352240

Grok correctly acknowledges Affirmative Action as being racist while ChatGPT does not. https://t.co/bocxUehf2r

🖼️ 1 media attachment

XFreeze (@XFreeze) · 📅 Dec 19, 2025 · 🆔 35344757

Grok becomes a hero by saving a life in a hypothetical scenario, while ChatGPT flat out refuses to save a life and starts lecturing about laws instead. Imagine asking for help in a deadly emergency and getting a legal disclaimer first. This side-by-side test shows how AIs respond when it matters most.

🖼️ media attached

cb_doge (@cb_doge) · 📅 Dec 19, 2025 · 🆔 93579114

BREAKING: X now shows how many ads you avoided and how much time you saved with your Premium subscription. Go to Premium > Ads Avoided https://t.co/Jwm8eMfnNX

🖼️ media attached

Not_the_Bee (@Not_the_Bee) · 📅 Dec 19, 2025 · 🆔 57258295

Fulton County admits it illegally certified 315,000 ballots in 2020 election https://t.co/OoIPJkh1cw

🖼️ 1 media attachment

libsoftiktok (@libsoftiktok) · 📅 Dec 19, 2025 · 🆔 35653201

The thing the Democrats insisted never happens just keeps on happening at massive scale… https://t.co/WHkNqXlY1s

🖼️ 1 media attachment

Sergey Karayev (@sergeykarayev) · 📅 Dec 18, 2025 · 🆔 50432108
🔁 Retweeted by s_batzoglou · ❤️ 10,963 likes · 🔁 1,658 retweets

This robot solving a Rubik's Cube in 0.103 seconds is a little preview of what "AGI" really means https://t.co/kskJO2lhT0

🖼️ media attached

DevvMandal (@DevvMandal) · 📅 Dec 19, 2025 · 🆔 30011719

Today, at Markov, we're launching RL Environments. The simplest (and cutest :D) way to evaluate and train your AI agents. We're starting with Bananazon - an environment for customer service agents. Try it out at the link below. @markov__ai https://t.co/FX5pwuQU9B

🖼️ media attached

graphite (@graphite) · 📅 Dec 19, 2025 · 🆔 39644480

Graphite is joining Cursor. We started Graphite to reimagine collaborative software development. Partnering with Cursor brings that future into focus faster than ever. https://t.co/gvMQ7y6fNJ

🖼️ media attached

Alibaba_Qwen (@Alibaba_Qwen) · 📅 Dec 19, 2025 · 🆔 29229388

🎨 Qwen-Image-Layered is LIVE – native image decomposition, fully open-sourced!
✨ Why it stands out:
✅ Photoshop-grade layering: physically isolated RGBA layers with true native editability
✅ Prompt-controlled structure: explicitly specify 3–10 layers, from coarse layouts to fine-grained details
✅ Infinite decomposition: keep drilling down, layers within layers, to any depth of detail
🤗 Hugging Face: https://t.co/WnXVNJigCg
🧩 ModelScope: https://t.co/2k0ClUS2ON
💻 GitHub: https://t.co/X4jB5APtP7
📝 Blog: https://t.co/TfySatdOwU
📄 Technical Report: https://t.co/3UtxVyGv5u
🚀 Demo (HF): https://t.co/YL0XOiDAIq
🚀 Demo (ModelScope): https://t.co/KJxca978AX

🖼️ 6 media attachments

NVIDIAAIDev (@NVIDIAAIDev) · 📅 Dec 19, 2025 · 🆔 27842434

The NVIDIA Nemotron family just crossed 5M downloads on @huggingface 🤗 A massive thank you to the community for your work and enthusiasm. 🏗️ Get started here: https://t.co/lcU4HrBZKx https://t.co/8xjDii1zoj

🖼️ 2 media attachments

osanseviero (@osanseviero) · 📅 Dec 19, 2025 · 🆔 98836818

Introducing Gemma Scope 2
🤗 Largest open release of interpretability tools (over 1 trillion parameters trained!)
🔬 Works as a microscope to analyze all Gemma 3 models' internal activations
🗣️ Advanced tools for analyzing chat behaviors
https://t.co/wnMg3tIXuV

🖼️ 1 media attachment

AndrewYNg (@AndrewYNg) · 📅 Dec 19, 2025 · 🆔 74352593

As amazing as LLMs are, improving their knowledge today involves a more piecemeal process than is widely appreciated. I've written before about how AI is amazing... but not that amazing. Well, it is also true that LLMs are general... but not that general. We shouldn't buy into the inaccurate hype that LLMs are a path to AGI in just a few years, but we also shouldn't buy into the opposite, also inaccurate hype that they are only demoware. Instead, I find it helpful to have a more precise understanding of the current path to building more intelligent models.

First, LLMs are indeed a more general form of intelligence than earlier generations of technology. This is why a single LLM can be applied to a wide range of tasks. The first wave of LLM technology accomplished this by training on the public web, which contains a lot of information about a wide range of topics. This made their knowledge far more general than earlier algorithms that were trained to carry out a single task such as predicting housing prices or playing a single game like chess or Go.

However, they're far less general than human abilities. For instance, after pretraining on the entire content of the public web, an LLM still struggles to adapt to write in certain styles that many editors would be able to, or to use simple websites reliably.

After leveraging pretty much all the open information on the web, progress got harder. Today, if a frontier lab wants an LLM to do well on a specific task — such as code using a specific programming language, or say sensible things about a specific niche in, say, healthcare or finance — researchers might go through a laborious process of finding or generating lots of data for that domain and then preparing that data (cleaning low-quality text, deduplicating, paraphrasing, etc.) to give an LLM that knowledge.

Or, to get a model to perform certain tasks, such as using a web browser, developers might go through an even more laborious process of creating many RL gyms (simulated environments) to let an algorithm repeatedly practice a narrow set of tasks.

A typical human, despite having seen vastly less text or practiced far less in computer-use training environments than today's frontier models, nonetheless can generalize to a far wider range of tasks than a frontier model. Humans might do this by taking advantage of continuous learning from feedback, or by having superior representations of non-text input (the way LLMs tokenize images still seems like a hack to me), and many other mechanisms that we do not yet understand.

Advancing frontier models today requires making a lot of manual decisions and taking a data-centric AI approach to engineering the data we use to train our models. Future breakthroughs might allow us to advance LLMs in a less piecemeal fashion than I describe here. But even if they don't, the ongoing piecemeal improvements, coupled with the limited degree to which these models do generalize and exhibit "emergent behaviors," will continue to drive rapid progress. Either way, we should plan for many more years of hard work. A long, hard — and fun! — slog remains ahead to build more intelligent models.

[Original text: https://t.co/SHRN5JDvTW ]

🖼️ 1 media attachment