Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
πŸ”Scobleizer retweeted
9
9to5Google
@9to5Google
πŸ“…
Dec 12, 2025
111d ago
πŸ†”59641522

Google Translate rolling out live translation using Gemini with any headphones https://t.co/c385eKMBP9 by @technacity

Media 1
❤️ 92 likes · 🔁 8 retweets

Scobleizer (@Scobleizer) · Dec 12, 2025 · 111d ago · ID 88144831

A look back and forward at social media.

You might not know, but I've been doing social media since before this site started, nearly 20 years ago. In fact, the founders of Twitter came to my social dinner years before they even started Twitter. I was the first to get to 1,000 followers here because of my then-famous blog, Scobleizer, covering innovation. I also wrote a very influential book on the early days of social media, "Naked Conversations." So I keep my eye out for how social media continues to morph, and here's one: @Second_MeAI

This is a mobile app, a new kind of AI assistant. Immediately it wanted me to clone my voice, which was a little weird, but it highly personalizes the experience to you. In my case it also immediately knew a lot about me (both the downsides and upsides of being a public figure). I've been using it a few days now and it gets better as you use it, which is a new trend I've noticed in other AI contexts too.

One thing that caught my attention is that as you talk with it and get it set up, it can connect you with other users, but it is the AI that talks to the other user's AI. I imagine this pattern will be used in dating contexts a lot: have your AI check out the other person pretty deeply to see if you have a good chance of getting along. In this case I'm using it to make global friends in a more social context, especially those with teenagers they are trying to motivate to do more (a problem I'm having with mine that I spent a lot of time talking with SecondMe's AI about).

Anyway, a new kind of social network is here: one that uses AI to build stronger connections with other people, and to make your life better. Try it at: https://t.co/4TL7ttikeZ

πŸ–ΌοΈ Media
πŸ”_akhaliq retweeted
J
Julien Chaumond
@julien_c
πŸ“…
Dec 12, 2025
111d ago
πŸ†”95543524

<3 https://t.co/EjXen8elV2

Media 1 · Media 2
❤️ 23 likes · 🔁 4 retweets

allen_ai (@allen_ai) · Dec 12, 2025 · 111d ago · ID 18509316

Olmo 3.1 is here. We extended our strongest RL run and scaled our instruct recipe to 32B, releasing Olmo 3.1 Think 32B & Olmo 3.1 Instruct 32B, our most capable models yet. 🧵 https://t.co/i8Ia5yGJoI

Media 1 · Media 2

ariG23498 (@ariG23498) · Dec 12, 2025 · 112d ago · ID 11270159

Congrats to all the software that went to space along with this project. @PyTorch @numpy_team @huggingface @OpenAI @wandb https://t.co/ttInMpxyFb

@karpathy • Wed Dec 10 17:25

nanoGPT - the first LLM to train and inference in space 🥹. It begins.

Media 1

Teknium (@Teknium) · Dec 12, 2025 · 111d ago · ID 72557358

Very cool project that a lot of people have asked for for a long time: an LLM trained on 90GB of only 1800s-and-older texts https://t.co/pio0FXssp4 https://t.co/ch1pxIWaHm

Media 1

remi_or_ (@remi_or_) · Dec 12, 2025 · 111d ago · ID 25594202

Just opened a PR to make continuous batching in transformers go EVEN faster 🚆 With simple optimizations like no torch sync and more GPU-sided operations, we gained 10-14.5% throughput across 500 requests 🥳 Soon, there will be native fast RL training in transformers. Keep up 😉 https://t.co/EoaEvhqS3C
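
The PR itself isn't shown here, but the scheduling idea behind continuous batching can be sketched in a few lines. Everything below (the toy scheduler, the request tuples) is illustrative, not the library's actual API: finished requests free their slot immediately, so queued requests join mid-generation instead of waiting for the whole batch to drain.

```python
from collections import deque

def continuous_batching(requests, max_batch=4):
    """Toy continuous-batching scheduler: a finished request frees its
    slot right away, so a waiting request joins mid-generation instead
    of waiting for the whole batch to drain."""
    queue = deque(requests)   # (request_id, tokens_to_generate)
    active = {}               # request_id -> tokens remaining
    steps, completed = 0, []
    while queue or active:
        # Fill free slots from the queue before each decode step.
        while queue and len(active) < max_batch:
            rid, n = queue.popleft()
            active[rid] = n
        # One decode step generates one token for every active request.
        steps += 1
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                completed.append(rid)
                del active[rid]   # slot is free for the next request
    return steps, completed
```

With `max_batch=2` and requests needing 2, 8, 2, 2, and 2 tokens, this scheduler finishes in 8 decode steps, while static batching (drain each batch fully before admitting the next) would take 12.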

Media 1

mishig25 (@mishig25) · Dec 12, 2025 · 111d ago · ID 63430848

Using AI to do more AI at HF. We added a chatbot on every HF doc page so that one can get answers faster. We are using open-source embedding models & LLMs through Hugging Chat and one of our inference providers to serve answers https://t.co/bnBERGjGTN
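
A doc-page chatbot like this is presumably retrieval-augmented: embed the question, rank the doc passages by similarity, and hand the top ones to an LLM as context. A minimal sketch of the retrieval half, using plain bag-of-words cosine similarity in place of a real embedding model (all names here are illustrative):

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term frequencies (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Counter returns 0 for absent terms, so the dot product only needs a's keys.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, docs, k=1):
    """Rank doc snippets by similarity to the question; the top snippets
    would then be stuffed into the LLM prompt as context."""
    q = bow(question)
    ranked = sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]
```

The production version would swap `bow` for an open-source embedding model, but the retrieve-then-answer shape stays the same.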

Media 1

jeffboudier (@jeffboudier) · Dec 12, 2025 · 111d ago · ID 42353200

"We're seeing that we can take open source models, fine-tune them, and get similar performance to the very best proprietary models at less than 10% of the cost" @williamready - @Pinterest CEO https://t.co/9bHW7Depl2

πŸ–ΌοΈ Media
J
julien_c
@julien_c
πŸ“…
Dec 12, 2025
111d ago
πŸ†”95543524

<3 https://t.co/EjXen8elV2

Media 1
πŸ–ΌοΈ Media
πŸ”huggingface retweeted
J
Julien Chaumond
@julien_c
πŸ“…
Dec 12, 2025
111d ago
πŸ†”95543524

<3 https://t.co/EjXen8elV2

Media 1Media 2
❀️23
likes
πŸ”4
retweets
πŸ–ΌοΈ Media
_akhaliq (@_akhaliq) · Dec 12, 2025 · 111d ago · ID 90239864

OpenAI just released circuit-sparsity https://t.co/cebKqr7IaQ

Media 1
❤️ 65 likes · 🔁 9 retweets
Cody Blakeney (@code_star) · Dec 12, 2025 · 111d ago · ID 75474276

on HF now! go get you some! https://t.co/byVhQFdVxX

@code_star • Fri Dec 12 16:48

We have a fun one for you! Introducing Luxical! Embedding models and fast text models are the workhorses of data curation pipelines. Fast text models are, well fast. Embedding models are more precise, but they really are not designed for the types of things we want to do in web

Media 1
❤️ 20 likes · 🔁 5 retweets
lukemerrick_ (@lukemerrick_) · Dec 12, 2025 · 111d ago · ID 08375791

Just dropped a new text embedding methodology. Fast as heck on CPU only and still great for document similarity analysis, clustering, and classification. How? Use a tiny ReLU network to approximate a big transformer from lexical (term frequency / bag of words) features. https://t.co/IXfpZCVcgt
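
The recipe reads like: hash terms into a sparse term-frequency vector, then push it through a small ReLU network trained to regress the big transformer's embeddings. A shape-level sketch with numpy; the sizes and the random (untrained) weights are placeholder assumptions, and crc32 stands in for whatever feature hashing the real model uses:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN, DIM = 4096, 256, 64   # hashed vocab size, hidden width, embedding dim

# In the real setup W1/W2 would be *trained* to match a transformer
# embedding model's outputs; random weights here only show the data flow.
W1 = rng.normal(0.0, 0.05, (VOCAB, HIDDEN))
W2 = rng.normal(0.0, 0.05, (HIDDEN, DIM))

def hashed_bow(text):
    """Sparse term-frequency features via feature hashing (no vocab file)."""
    x = np.zeros(VOCAB)
    for tok in text.lower().split():
        x[zlib.crc32(tok.encode()) % VOCAB] += 1.0
    return x

def embed(text):
    h = np.maximum(hashed_bow(text) @ W1, 0.0)   # single ReLU hidden layer
    v = h @ W2
    return v / np.linalg.norm(v)                 # unit-norm output embedding
```

The payoff is that inference is just two small matmuls over a sparse count vector, which is why this kind of model can run "fast as heck" on CPU.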

Media 1

victormustar (@victormustar) · Dec 12, 2025 · 111d ago · ID 30883251

One of the coolest AI projects ever? Training an LLM from scratch using ONLY texts from 1800-1875 London. Goal: create a language model with zero modern-bias contamination. A true time capsule 🧙‍♂️ https://t.co/teepdYTfqh

Media 1

americalover24 (@americalover24) · Dec 12, 2025 · 112d ago · ID 00750900

@Ivantheboomer @atalovesyou We don't want to be a minority in our own country https://t.co/s1BkS5eofu

Media 1 · Media 2

hedlike_a_hole (@hedlike_a_hole) · Dec 12, 2025 · 111d ago · ID 70679544

600 lb squat don't care https://t.co/4xayMXnJ9g

@sapinker • Fri Dec 12 01:56

Bombshell: Oliver Sacks (a humane man & a fine essayist) made up many of the details in his famous case studies, deluding neuroscientists, psychologists, & general readers for decades. The man who mistook his wife for a hat? The autistic twins who generated multi-digit prime numb

Media 1
❤️ 34 likes · 🔁 4 retweets · retweeted by youwouldntpost
OrganizerMemes (@OrganizerMemes) · Dec 12, 2025 · 111d ago · ID 70013590

https://t.co/50jJSSq2WT

@bethanyshondark β€’ Thu Dec 11 13:41

This former CBS employee laments that Bari Weiss works there now. I tell him anyone who feels the same should quit. Instead of ignoring me or making a good-faith argument, he screenshots an old and out-of-context tweet, and just posts it partially. https://t.co/TPZaoJUXE8

Media 1
❤️ 82 likes · 🔁 3 retweets · retweeted by youwouldntpost
youwouldntpost (@youwouldntpost) · Dec 12, 2025 · 111d ago · ID 36010017

@poddtadre https://t.co/oV9HfEQd5X

Media 1 · Media 2

dair_ai (@dair_ai) · Dec 12, 2025 · 111d ago · ID 65305021

Training of Physical Neural Networks

Could we train AI models 1000x larger than today's? Could we run them privately on edge devices like smartphones? The answer might be yes, but not with GPUs. This paper suggests that the path forward may require physical neural networks.

Physical Neural Networks (PNNs) use properties of physical systems to perform computation: optical systems, photonics, analog electronics, and even mechanical substrates. Physics can compute certain operations far more efficiently than digital transistors.

The problem isn't inference. The problem is training. Backpropagation has powered deep learning's success, but implementing it in physical hardware faces fundamental challenges: weight transport, gradient communication across layers, and precise knowledge of activation functions.

This review maps the landscape of PNN training methods:

1) In-silico training: create digital twins of physical systems, optimize them computationally, then deploy to hardware. Fast iteration, but limited by model fidelity: fabrication imperfections, misalignments, and detection noise break the digital-physical correspondence.

2) Physics-aware training: the physical system performs the forward pass, while a digital model handles backpropagation. A hybrid approach that mitigates experimental noise while maintaining gradient-based optimization. Successfully demonstrated across optical, mechanical, and electronic systems.

3) Equilibrium propagation: for energy-based systems that naturally minimize a Lyapunov function. Weight updates use local contrastive rules comparing equilibrium states. Implemented on memristor crossbar arrays, with potential energy gains of 4 orders of magnitude versus GPUs.

4) Local learning methods: avoid global gradient communication entirely. Physical Local Learning uses forward-mode differentiation through physical perturbations; no digital model required. Demonstrated on multimode optical fibers with 10,000+ trainable parameters.

The emerging hardware spans optical correlators, photonic integrated circuits, spintronic devices, memristor crossbars, exciton-polariton condensates, and quantum circuits. No method yet scales to backpropagation's performance on digital hardware, but the trajectory is clear: diverse training techniques are converging on practical PNN implementations. As AI scaling hits GPU limits, physical computing offers a path to models orders of magnitude larger and more energy-efficient than what's currently possible.

Paper: https://t.co/AiTbVWMZSP
Learn to build with LLMs and AI Agents in our academy: https://t.co/zQXQt0PMbG
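
As a toy of the "local learning through physical perturbations" idea, here is a simultaneous-perturbation (SPSA-style) update in numpy: the "physical system" is a black box we can only run forward, and two perturbed forward passes replace backprop. The linear model and all constants below are illustrative assumptions, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "physical system": a black box we can only run forward and perturb.
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(64, 2))
y = X @ true_w

def loss(w):
    """Forward pass only -- all a physical system lets you measure."""
    return float(np.mean((X @ w - y) ** 2))

w = np.zeros(2)
eps, lr = 1e-3, 0.05
for _ in range(300):
    # Simultaneous random perturbation: two forward passes give a
    # stochastic estimate of the full gradient, no backprop needed.
    delta = rng.choice([-1.0, 1.0], size=2)
    g_hat = (loss(w + eps * delta) - loss(w - eps * delta)) / (2 * eps) * delta
    w -= lr * g_hat
```

After 300 perturbative steps the weights recover `true_w` using nothing but forward evaluations; the cost is that each step probes the system twice and the gradient estimate is noisy, which is the trade-off the physical local-learning methods above are navigating.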

Media 1

omarsar0 (@omarsar0) · Dec 12, 2025 · 111d ago · ID 63903835

Reasoning models now pass all three levels of the CFA exam.

In 2023, ChatGPT (GPT-3.5-turbo) failed CFA Levels I and II. GPT-4 passed Level I but failed Level II. LLMs struggled with finance exams requiring numerical precision, qualitative analysis, and ethical judgment simultaneously. That ceiling has been shattered, which speaks to the potential of reasoning models.

Researchers evaluated state-of-the-art reasoning models on 980 CFA mock exam questions across all three levels. The results: Gemini 3.0 Pro, Gemini 2.5 Pro, GPT-5, Grok 4, Claude Opus 4.1, and DeepSeek-V3.1 all pass every level. Gemini 3.0 Pro achieves 97.6% on Level I. GPT-5 leads Level II with 94.3%. On Level III constructed-response questions, Gemini 3.0 Pro scores 92.0%.

The CFA exam tests an evolving hierarchy of skills. Level I covers foundational knowledge through multiple-choice questions. Level II tests application through case-based vignettes. Level III requires complex synthesis and portfolio construction with both multiple-choice and constructed-response formats.

Quantitative methods, previously a major weakness, now show near-zero error rates for top models. The persistent challenge is Ethics and Professional Standards, where even the best models show 17-21% error rates on Level II.

An interesting pattern emerges with prompting. Chain-of-thought reasoning helps baseline models substantially but shows inconsistent effects on reasoning models for multiple-choice questions. However, CoT remains highly effective for constructed-response questions: Gemini 3.0 Pro jumps from 86.6% to 92.0% on CRQs with explicit reasoning prompts.

Reasoning models now surpass the expertise required of entry-level to mid-level financial analysts. The question shifts from whether AI can pass professional exams to how these capabilities translate to real-world financial decision-making.

Paper: https://t.co/wdwtefM3EN
Learn to build effective AI Agents in our academy: https://t.co/JBU5beIoD0

Media 1

AravSrinivas (@AravSrinivas) · Dec 12, 2025 · 111d ago · ID 31341192

Comet Android can debug your code from your phone. It analyzed CI logs, traced the failure, figured out a fix, committed it, and opened a PR that's ready to merge https://t.co/lcsuuE7cju

πŸ–ΌοΈ Media
K
kevinafischer
@kevinafischer
πŸ“…
Dec 12, 2025
111d ago
πŸ†”07027420

Launching: OPEN SOULS. Open source framework for creating AI souls. Check out the repo, run the example souls, and most of all, have fun https://t.co/hzQ4TBD0vd

Media 1

ClementDelangue (@ClementDelangue) · Dec 12, 2025 · 111d ago · ID 16698313

Personally, it feels like we've reached the peak of "Proprietary APIs" and that we're entering a much more balanced world for AI, where open-source, training, @huggingface (and other players) will start getting a much bigger share of the attention, usage and revenue. Let's go! https://t.co/nNFntbAmao

Media 1

_akhaliq (@_akhaliq) · Dec 12, 2025 · 111d ago · ID 51883823

Apple presents "One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation" https://t.co/CGs5cb4M9J

Media 1 · Media 2

_akhaliq (@_akhaliq) · Dec 12, 2025 · 111d ago · ID 18994339

discuss: https://t.co/1aUcQ6S8VA

Media 1

code (@code) · Dec 12, 2025 · 111d ago · ID 83234772

Favorite way to use Copilot with April Yoho from GitHub 🤖✨ https://t.co/sVg21DwdlT

πŸ–ΌοΈ Media
D
drfeifei
@drfeifei
πŸ“…
Dec 12, 2025
111d ago
πŸ†”16314470

Want to make a big impact in robotics research? Come and work with robots and the smartest students @StanfordSVL! We are hiring a software developer focusing on simulation for robotics & robotic learning. You'll be working directly with me, @jiajunwu_cs, and our amazing students and researchers. Please apply at: https://t.co/jr2hSqMKiP

Media 1