Your curated collection of saved posts and media

Showing 32 posts · last 7 days · newest first
🔁 huggingface retweeted
Julien Chaumond (@julien_c) · Dec 12, 2025

<3 https://t.co/EjXen8elV2

[2 media] · ❤️ 23 · 🔁 4
🔁 huggingface retweeted
AK (@_akhaliq) · Dec 12, 2025

OpenAI just released circuit-sparsity https://t.co/cebKqr7IaQ

[media] · ❤️ 65 · 🔁 9
🔁 huggingface retweeted
Cody Blakeney (@code_star) · Dec 12, 2025

on HF now! go get you some! https://t.co/byVhQFdVxX

@code_star • Fri Dec 12 16:48
We have a fun one for you! Introducing Luxical! Embedding models and fast text models are the workhorses of data curation pipelines. Fast text models are, well, fast. Embedding models are more precise, but they really are not designed for the types of things we want to do in web

[2 media] · ❤️ 20 · 🔁 5
lukemerrick_ (@lukemerrick_) · Dec 12, 2025

Just dropped a new text embedding methodology. Fast as heck on CPU only and still great for document similarity analysis, clustering, and classification. How? Use a tiny ReLU network to approximate a big transformer from lexical (term frequency / bag of words) features. https://t.co/IXfpZCVcgt

[media]
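The post above describes distilling a big transformer's embeddings into a tiny ReLU network over bag-of-words features. A minimal sketch of that idea, using random stand-in data in place of a real corpus and teacher model (all sizes and names here are hypothetical illustrations, not the actual Luxical method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
VOCAB, HIDDEN, EMB_DIM, N_DOCS = 500, 128, 32, 256

# Stand-ins: random "term frequency" features and random "teacher"
# embeddings (in practice: real documents and a big transformer).
bow = rng.random((N_DOCS, VOCAB)).astype(np.float32)
teacher = rng.normal(size=(N_DOCS, EMB_DIM)).astype(np.float32)

# Tiny one-hidden-layer ReLU network: bag-of-words -> embedding.
W1 = rng.normal(0, 0.05, size=(VOCAB, HIDDEN)).astype(np.float32)
W2 = rng.normal(0, 0.05, size=(HIDDEN, EMB_DIM)).astype(np.float32)

def forward(x):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    return h @ W2, h

def mse(pred):
    return float(np.mean((pred - teacher) ** 2))

loss0 = mse(forward(bow)[0])
lr = 5e-3
for _ in range(300):
    pred, h = forward(bow)
    err = (pred - teacher) / N_DOCS      # MSE gradient (up to a constant)
    W2_grad = h.T @ err
    h_grad = (err @ W2.T) * (h > 0)      # backprop through the ReLU
    W1_grad = bow.T @ h_grad
    W2 -= lr * W2_grad
    W1 -= lr * W1_grad

final_loss = mse(forward(bow)[0])
```

At inference time only the two matrix products and a ReLU remain, which is why such a student runs fast on CPU.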
victormustar (@victormustar) · Dec 12, 2025

One of the coolest AI projects ever? Training an LLM from scratch using ONLY texts from 1800-1875 London. Goal: create a language model with zero modern bias contamination. A true time capsule 🧙‍♂️ https://t.co/teepdYTfqh

[media]
americalover24 (@americalover24) · Dec 12, 2025

@Ivantheboomer @atalovesyou We don’t want to be a minority in our own country https://t.co/s1BkS5eofu

[2 media]
🔁 youwouldntpost retweeted
🕳⚰️💨 (@hedlike_a_hole) · Dec 12, 2025

600 lb squat don’t care https://t.co/4xayMXnJ9g

@sapinker • Fri Dec 12 01:56
Bombshell: Oliver Sacks (a humane man & a fine essayist) made up many of the details in his famous case studies, deluding neuroscientists, psychologists, & general readers for decades. The man who mistook his wife for a hat? The autistic twins who generated multi-digit prime numb

[2 media] · ❤️ 34 · 🔁 4
🔁 youwouldntpost retweeted
OrganizerMemes (@OrganizerMemes) · Dec 12, 2025

https://t.co/50jJSSq2WT

@bethanyshondark • Thu Dec 11 13:41
This former CBS employee laments Bari Weiss works there now. I tell him anyone who feels the same should quit. Instead of ignoring me or making a good faith argument, he screenshots an old and out of context tweet, and just posts it partially. https://t.co/TPZaoJUXE8

[2 media] · ❤️ 82 · 🔁 3
youwouldntpost (@youwouldntpost) · Dec 12, 2025

@poddtadre https://t.co/oV9HfEQd5X

[2 media]
dair_ai (@dair_ai) · Dec 12, 2025

Training of Physical Neural Networks

Could we train AI models 1000x larger than today's? Could we run them privately on edge devices like smartphones? The answer might be yes, but not with GPUs. This paper suggests that the path forward may require physical neural networks.

Physical Neural Networks (PNNs) use properties of physical systems to perform computation: optical systems, photonics, analog electronics, and even mechanical substrates. Physics can compute certain operations far more efficiently than digital transistors.

The problem isn't inference. The problem is training. Backpropagation has powered deep learning's success, but implementing it in physical hardware faces fundamental challenges: weight transport, gradient communication across layers, and precise knowledge of activation functions.

This review maps the landscape of PNN training methods:

1) In-silico training: create digital twins of physical systems, optimize them computationally, then deploy to hardware. Fast iteration but limited by model fidelity; fabrication imperfections, misalignments, and detection noise break the digital-physical correspondence.

2) Physics-aware training: the physical system performs the forward pass while a digital model handles backpropagation. A hybrid approach that mitigates experimental noise while maintaining gradient-based optimization. Successfully demonstrated across optical, mechanical, and electronic systems.

3) Equilibrium Propagation: for energy-based systems that naturally minimize a Lyapunov function. Weight updates use local contrastive rules comparing equilibrium states. Implemented on memristor crossbar arrays with potential energy gains of four orders of magnitude versus GPUs.

4) Local learning methods: avoid global gradient communication entirely. Physical Local Learning uses forward-mode differentiation through physical perturbations; no digital model required. Demonstrated on multimode optical fibers with 10,000+ trainable parameters.

The emerging hardware spans optical correlators, photonic integrated circuits, spintronic devices, memristor crossbars, exciton-polariton condensates, and quantum circuits. No method yet scales to backpropagation's performance on digital hardware. But the trajectory is clear: diverse training techniques are converging on practical PNN implementations. As AI scaling hits GPU limits, physical computing offers a path to models orders of magnitude larger and more energy-efficient than what's currently possible.

Paper: https://t.co/AiTbVWMZSP
Learn to build with LLMs and AI Agents in our academy: https://t.co/zQXQt0PMbG

[media]
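The "local learning" family described above (forward-mode differentiation through physical perturbations) can be sketched as an SPSA-style loop against a black-box forward pass. The `physical_forward` function below is a hypothetical stand-in for real hardware, not any system from the paper; the point is that training uses only forward runs, never backpropagation through the system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Opaque "physical" forward pass: we may run it and read outputs, but we
# pretend to have no access to its internals or analytic gradients
# (stand-in for an optical or analog system; purely hypothetical).
def physical_forward(params, x):
    return np.tanh(x @ params.reshape(4, 2))

x = rng.normal(size=(32, 4))
target = np.tanh(x @ rng.normal(size=(4, 2)))  # desired responses

def loss(params):
    return float(np.mean((physical_forward(params, x) - target) ** 2))

params = rng.normal(scale=0.1, size=8)
eps, lr = 1e-3, 0.1
loss0 = loss(params)

# SPSA-style perturbation learning: perturb every parameter at once,
# estimate the directional derivative from two forward runs, and step
# along the perturbation direction. Only forward passes are needed.
for _ in range(500):
    delta = rng.choice([-1.0, 1.0], size=params.size)
    g = (loss(params + eps * delta) - loss(params - eps * delta)) / (2 * eps)
    params -= lr * g * delta

final_loss = loss(params)
```

Two physical runs per update, regardless of parameter count, is what makes this style of training attractive when gradients simply aren't available from the substrate.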
omarsar0 (@omarsar0) · Dec 12, 2025

Reasoning models now pass all three levels of the CFA exam.

In 2023, ChatGPT (GPT-3.5-turbo) failed CFA Levels I and II. GPT-4 passed Level I but failed Level II. LLMs struggled with finance exams requiring numerical precision, qualitative analysis, and ethical judgment simultaneously. That ceiling has been shattered, which speaks to the potential of reasoning models.

Researchers evaluated state-of-the-art reasoning models on 980 CFA mock exam questions across all three levels. The results: Gemini 3.0 Pro, Gemini 2.5 Pro, GPT-5, Grok 4, Claude Opus 4.1, and DeepSeek-V3.1 all pass every level. Gemini 3.0 Pro achieves 97.6% on Level I. GPT-5 leads Level II with 94.3%. On Level III constructed-response questions, Gemini 3.0 Pro scores 92.0%.

The CFA exam tests an evolving hierarchy of skills. Level I covers foundational knowledge through multiple-choice questions. Level II tests application through case-based vignettes. Level III requires complex synthesis and portfolio construction with both multiple-choice and constructed-response formats.

Quantitative methods, previously a major weakness, now show near-zero error rates for top models. The persistent challenge is Ethics and Professional Standards, where even the best models show 17-21% error rates on Level II.

An interesting pattern emerges with prompting. Chain-of-thought reasoning helps baseline models substantially but shows inconsistent effects on reasoning models for multiple-choice questions. However, CoT remains highly effective for constructed-response questions: Gemini 3.0 Pro jumps from 86.6% to 92.0% on CRQs with explicit reasoning prompts.

Reasoning models now surpass the expertise required of entry-level to mid-level financial analysts. The question shifts from whether AI can pass professional exams to how these capabilities translate to real-world financial decision-making.

Paper: https://t.co/wdwtefM3EN
Learn to build effective AI Agents in our academy: https://t.co/JBU5beIoD0

[media]
AravSrinivas (@AravSrinivas) · Dec 12, 2025

Comet Android can debug your code from your phone. It analyzed CI logs, traced the failure, figured out a fix, committed it, and opened a PR that’s ready to merge https://t.co/lcsuuE7cju

[media]
kevinafischer (@kevinafischer) · Dec 12, 2025

Launching: OPEN SOULS. Open source framework for creating AI souls. Check out the repo, run the example souls, and most of all, have fun https://t.co/hzQ4TBD0vd

[media]
ClementDelangue (@ClementDelangue) · Dec 12, 2025

Personally, it feels like we've reached the peak of "Proprietary APIs" and that we're entering a much more balanced world for AI where open-source, training, @huggingface (and other players) will start getting a much bigger share of the attention, usage and revenue. Let's go! https://t.co/nNFntbAmao

[media]
_akhaliq (@_akhaliq) · Dec 12, 2025

Apple presents One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation https://t.co/CGs5cb4M9J

[2 media]
_akhaliq (@_akhaliq) · Dec 12, 2025

discuss: https://t.co/1aUcQ6S8VA

[media]
code (@code) · Dec 12, 2025

Favorite way to use Copilot with April Yoho from GitHub 🤖✨ https://t.co/sVg21DwdlT

[media]
drfeifei (@drfeifei) · Dec 12, 2025

Want to make a big impact in robotics research? Come and work with robots and the smartest students @StanfordSVL! We are hiring a software developer, focusing on simulation for robotics & robotic learning. You'll be working directly with me, @jiajunwu_cs and our amazing students and researchers. Please apply at: https://t.co/jr2hSqMKiP

[media]
mydeiinui (@mydeiinui) · Dec 06, 2025

“every year, in every life, i will remember the warmth in your embrace on this special day.” © jokiruru #amodei #soulbondtwt #yumetwt https://t.co/je0oraObXu

[2 media]
mydeiinui (@mydeiinui) · Dec 06, 2025

present from @LanternRites ..🥹🥹🥹🥹🥹🥹I LOVE YOU SO MUCH WAAAAA THE AMODEI EVER https://t.co/BAVgZe4D2U

[2 media]
ThenativePanda (@ThenativePanda) · Dec 10, 2025

Please, everyone, help tap "request community note". The account is so big it definitely won't get taken down, so I figure at least getting a community note on the fake news would help. You can follow the steps in this image; for the link, you can copy the link of the tweet we quoted and paste it in, or if anyone has a tweet where they confirmed it isn't ARMY, you can attach that instead, or post it in the community. https://t.co/lX3XV7ZfZD

@ThenativePanda • Wed Dec 10 07:03
There is currently no verified evidence that the truck was sent by ARMY. This claim is unverified and may mislead readers. The situation also leaves open the possibility of impersonation by individuals seeking to harm the artist or the group. @BIGHIT_MUSIC

[4 media]
lockwoodsea (@lockwoodsea) · Dec 10, 2025

Native Instruments' Absynth 6 was released, but while looking into vintage synths, Native Instruments' RETRO MACHINES MK2 caught my eye. I had ChatGPT put together the real hardware behind each preset folder (category). For reference. #nativeinstruments #Retromaschines https://t.co/itL099XWbt

[2 media]
_akhaliq (@_akhaliq) · Dec 12, 2025

Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation https://t.co/jdm6fDmOCI

[2 media]
_akhaliq (@_akhaliq) · Dec 12, 2025

discuss: https://t.co/5OzgF5ytyy

[media]
AdinaYakup (@AdinaYakup) · Dec 12, 2025

Dolphin-v2 🐬 new document parsing model released by @ByteDanceOSS
✨ 3B - MIT license
✨ Works on any document: PDFs, scans, photos
✨ Understands 21 types of content: text, tables, code, formulas, figures & more
✨ Pixel-level precision via absolute coordinate prediction
https://t.co/aLuNxUAs0k

[media]
victormustar (@victormustar) · Dec 12, 2025

🎉 llama.cpp now has Ollama-style model management.
• Auto-discover GGUFs from cache
• Load on first request
• Each model runs in its own process
• Route by `model` (OpenAI-compatible API)
• LRU unload at `--models-max`
https://t.co/yfmfHL7zzj

[media]
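The "LRU unload at `--models-max`" behavior above can be illustrated with a tiny cache sketch. This is a hypothetical Python illustration of the eviction policy only, not llama.cpp's actual implementation; the model names and the `loader` callback are invented for the example:

```python
from collections import OrderedDict

class ModelPool:
    """Toy sketch of LRU model management: load a model on first
    request, evict the least-recently-used one when more than
    `models_max` are resident (analogue of `--models-max`)."""

    def __init__(self, models_max, loader):
        self.models_max = models_max
        self.loader = loader          # maps name -> loaded model
        self.loaded = OrderedDict()   # name -> model, oldest first

    def get(self, name):
        if name in self.loaded:
            self.loaded.move_to_end(name)                 # recently used
        else:
            if len(self.loaded) >= self.models_max:
                self.loaded.popitem(last=False)           # LRU unload
            self.loaded[name] = self.loader(name)         # lazy load
        return self.loaded[name]

pool = ModelPool(models_max=2, loader=lambda name: f"<{name} weights>")
pool.get("llama-3"); pool.get("qwen-2"); pool.get("llama-3")
pool.get("phi-4")             # exceeds models_max: "qwen-2" is unloaded
resident = list(pool.loaded)  # -> ["llama-3", "phi-4"]
```

In the real server each resident model is a separate process and requests are routed to it by the `model` field of the OpenAI-compatible API; the sketch only captures the load-on-first-request and LRU-eviction bookkeeping.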
fchollet (@fchollet) · Dec 12, 2025

Fluid intelligence as measured by ARC 1 & 2 is your ability to turn information into a model that will generalize. That's not the only thing you need to make an intelligent agent.

To start with, when you're an agent in the real world, information is not provided to you passively. You have to go get it. That's "exploration": the agent's ability to efficiently acquire useful information (to turn into a world model) by interacting with its environment.

Next, in the real world, you aren't provided instructions. There's no fixed goal. You have to figure out what to do. That's "goal-setting": the ability to identify interesting or desirable future world states, via your intrinsic and extrinsic drives. This is a core part of being autonomous.

Finally, "planning" represents the ability to accurately and efficiently map out and execute an action path from the current state to the desired goal, including the ability to course correct. That is also different from the ability to turn information into a model -- it's an application of having a model.

All of these problems are still largely open. They're all much easier than solving fluid intelligence, in my opinion. Among them, the hardest one is exploration and the easiest one is planning.

@wendyweeww • Fri Dec 12 06:21
@fchollet What type of intelligence is needed for “exploration, goal-setting, and interactive planning”? What is “beyond fluid intelligence”?

[media]
rajiinio (@rajiinio) · Dec 12, 2025

US CAISI is hiring -- the internal govt name is "IT Specialist" but it is effectively a research scientist role! Salary is $120,579 to $195,200 per year & you work on AI evaluation within government agencies! Dream job for the right person. Details: https://t.co/HCZWEgqHex https://t.co/D9nReCZx6X

[2 media]
Page 298 of 560