Your curated collection of saved posts and media
Google Translate rolling out live translation using Gemini with any headphones https://t.co/c385eKMBP9 by @technacity
A look back and forward at social media

You might not know, but I've been doing social media since before this site started nearly 20 years ago. In fact, the founders of Twitter came to my social dinner years before they even started Twitter. I was the first to get to 1,000 followers here, because of my then-famous blog, Scobleizer, covering innovation. I also wrote a very influential book on the early days of social media, "Naked Conversations." So I keep my eye out for how social media continues to morph, and here's one: @Second_MeAI

This is a mobile app, a new kind of AI assistant. Immediately it wanted me to clone my voice, which was a little weird, but it highly personalizes the experience to you. In my case it also immediately knew a lot about me (both the downsides and upsides of being a public figure). I've been using it a few days now and it gets better as you use it, which is a new trend I've noticed in other AI contexts too.

One thing that caught my attention is that as you talk with it and get it set up, it can connect you with other users, but it is your AI that talks to the other user's AI. I imagine this pattern will be used in dating contexts a lot: have your AI check out the other person pretty deeply to see if you have a good chance of getting along. In this case I'm using it to make global friends in a more social context, especially parents with teenagers they are trying to motivate to do more (a problem I'm having with mine that I spent a lot of time talking with SecondMe's AI about).

Anyway, a new kind of social network is here: one that uses AI to build stronger connections with other people, and to make your life better. Try it at: https://t.co/4TL7ttikeZ
<3 https://t.co/EjXen8elV2

Olmo 3.1 is here. We extended our strongest RL run and scaled our instruct recipe to 32B, releasing Olmo 3.1 Think 32B & Olmo 3.1 Instruct 32B, our most capable models yet. 🧵 https://t.co/i8Ia5yGJoI

Congrats to all the software that went to space along with this project. @PyTorch @numpy_team @huggingface @OpenAI @wandb https://t.co/ttInMpxyFb
nanoGPT - the first LLM to train and inference in space 🥹. It begins.
Very cool project that a lot of people have asked for for a long time: an LLM trained on 90GB of only 1800s-and-older texts https://t.co/pio0FXssp4 https://t.co/ch1pxIWaHm
Just opened a PR to make continuous batching in transformers go EVEN faster! With simple optimizations like no torch sync and more GPU-sided operations, we gained 10-14.5% throughput across 500 requests 🥳 Soon, there will be native fast RL training in transformers. Keep up! https://t.co/EoaEvhqS3C
Using AI to do more AI at HF. We added a chatbot on every HF doc page so that one can get answers faster. We're using open-source embedding models & LLMs through HuggingChat and one of our inference providers to serve answers https://t.co/bnBERGjGTN
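The docs-chatbot pattern described above can be sketched in a few lines. This is a toy illustration, not Hugging Face's actual stack: the doc chunks are made up, and real systems score relevance with learned embedding models rather than the word-overlap stand-in used here to keep the sketch stdlib-only.

```python
# Toy retrieval-augmented sketch: find the most relevant doc chunk for a
# question, then hand it to an answering LLM as context.
from collections import Counter

doc_chunks = [
    "pipeline() loads a model and tokenizer for a task in one call.",
    "Trainer handles the training loop, evaluation and checkpointing.",
    "load_dataset downloads and caches a dataset by name.",
]

def score(question, chunk):
    # Real systems: cosine similarity of embedding vectors.
    # Here: count of shared words, as a crude stand-in.
    q = Counter(question.lower().split())
    c = Counter(chunk.lower().split())
    return sum((q & c).values())

def retrieve(question):
    return max(doc_chunks, key=lambda ch: score(question, ch))

def build_prompt(question):
    # The retrieved chunk becomes grounding context for the model.
    return f"Context: {retrieve(question)}\nQuestion: {question}\nAnswer:"

print(build_prompt("How do I run the training loop?"))
```

The prompt string would then be sent to whichever hosted LLM serves the answers; only the retrieval side is sketched here.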
"We're seeing that we can take open source models, fine-tune them, and get similar performance to the very best proprietary models at less than 10% of the cost" @williamready - @Pinterest CEO https://t.co/9bHW7Depl2

OpenAI just released circuit-sparsity https://t.co/cebKqr7IaQ
on HF now! go get you some! https://t.co/byVhQFdVxX
We have a fun one for you! Introducing Luxical! Embedding models and fast text models are the workhorses of data curation pipelines. Fast text models are, well, fast. Embedding models are more precise, but they really are not designed for the types of things we want to do in web

Just dropped a new text embedding methodology. Fast as heck on CPU only and still great for document similarity analysis, clustering, and classification. How? Use a tiny ReLU network to approximate a big transformer from lexical (term frequency / bag of words) features. https://t.co/IXfpZCVcgt
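The distillation idea described above can be sketched end to end. This is a toy illustration, not the actual released code: the vocabulary, the "teacher" embeddings, and the network size are all made up, and a real version would fit the ReLU network against a large transformer's embeddings over a big corpus.

```python
# Toy sketch: approximate a "big" embedding model with a tiny ReLU network
# that reads only lexical (term-frequency / bag-of-words) features, so
# inference stays cheap on CPU.
import random

random.seed(0)

VOCAB = ["cat", "dog", "pet", "stock", "bond", "market"]
DIM = 4  # embedding dimension of the (pretend) teacher model

def bow(text):
    """Term-frequency bag-of-words vector over a fixed vocabulary."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

# Stand-in "teacher" embeddings: in reality these would be produced
# offline by a large transformer over the training corpus.
teacher = {
    "cat dog pet": [1.0, 1.0, 0.0, 0.0],
    "stock bond market": [0.0, 0.0, 1.0, 1.0],
}

# One ReLU layer mapping |VOCAB| term counts to DIM dimensions.
W1 = [[random.uniform(0.1, 0.5) for _ in VOCAB] for _ in range(DIM)]

def student(x):
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

# Plain SGD on the squared error between student and teacher embeddings.
lr = 0.05
for epoch in range(500):
    for text, target in teacher.items():
        x = bow(text)
        y = student(x)
        for i in range(DIM):
            if y[i] > 0:  # ReLU gradient is zero where the unit is inactive
                err = 2 * (y[i] - target[i]) / DIM
                for j in range(len(VOCAB)):
                    W1[i][j] -= lr * err * x[j]

final_loss = sum(mse(student(bow(t)), v) for t, v in teacher.items())
```

At inference time only `bow` plus one small matrix multiply runs, which is why this style of model is fast on CPU for clustering and similarity work.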
One of the coolest AI projects ever? Training an LLM from scratch using ONLY texts from 1800-1875 London. Goal: create a language model with zero modern bias contamination. A true time capsule 🧙‍♂️ https://t.co/teepdYTfqh
@Ivantheboomer @atalovesyou We don't want to be a minority in our own country https://t.co/s1BkS5eofu

600 lb squat don't care https://t.co/4xayMXnJ9g
Bombshell: Oliver Sacks (a humane man & a fine essayist) made up many of the details in his famous case studies, deluding neuroscientists, psychologists, & general readers for decades. The man who mistook his wife for a hat? The autistic twins who generated multi-digit prime numbers

https://t.co/50jJSSq2WT
This former CBS employee laments Bari Weiss works there now. I tell him anyone who feels the same should quit. Instead of ignoring me or making a good faith argument, he screenshots an old and out of context tweet, and just posts it partially. https://t.co/TPZaoJUXE8

@poddtadre https://t.co/oV9HfEQd5X

Training of Physical Neural Networks

Could we train AI models 1000x larger than today's? Could we run them privately on edge devices like smartphones? The answer might be yes, but not with GPUs. This paper suggests that the path forward may require physical neural networks.

Physical Neural Networks (PNNs) use properties of physical systems to perform computation: optical systems, photonics, analog electronics, and even mechanical substrates. Physics can compute certain operations far more efficiently than digital transistors.

The problem isn't inference. The problem is training. Backpropagation has powered deep learning's success, but implementing it in physical hardware faces fundamental challenges: weight transport, gradient communication across layers, and precise knowledge of activation functions.

This review maps the landscape of PNN training methods:

1) In-silico training: create digital twins of physical systems, optimize them computationally, then deploy to hardware. Fast iteration but limited by model fidelity: fabrication imperfections, misalignments, and detection noise break the digital-physical correspondence.

2) Physics-aware training: the physical system performs the forward pass while a digital model handles backpropagation. A hybrid approach that mitigates experimental noise while maintaining gradient-based optimization. Successfully demonstrated across optical, mechanical, and electronic systems.

3) Equilibrium Propagation: for energy-based systems that naturally minimize a Lyapunov function. Weight updates use local contrastive rules comparing equilibrium states. Implemented on memristor crossbar arrays with potential energy gains of 4 orders of magnitude versus GPUs.

4) Local learning methods: avoid global gradient communication entirely. Physical Local Learning uses forward-mode differentiation through physical perturbations, with no digital model required. Demonstrated on multimode optical fibers with 10,000+ trainable parameters.

The emerging hardware spans optical correlators, photonic integrated circuits, spintronic devices, memristor crossbars, exciton-polariton condensates, and quantum circuits. No method yet scales to backpropagation's performance on digital hardware. But the trajectory is clear: diverse training techniques are converging on practical PNN implementations. As AI scaling hits GPU limits, physical computing offers a path to models orders of magnitude larger and more energy-efficient than what's currently possible.

Paper: https://t.co/AiTbVWMZSP

Learn to build with LLMs and AI Agents in our academy: https://t.co/zQXQt0PMbG
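The perturbation idea behind the local learning methods can be sketched in plain software. This is a toy SPSA-style illustration, not the paper's implementation: `physical_loss` is a made-up stand-in for a real physical measurement, and all constants are illustrative. The point is that training needs only loss measurements, never a backward pass through the system.

```python
# SPSA-style local learning sketch: perturb all parameters at random,
# measure the system's loss twice, and use the difference as a
# stochastic gradient estimate. No backpropagation required.
import random

random.seed(1)

def physical_loss(params):
    # Stand-in for a physical measurement; in a real PNN this would be
    # the measured readout error of, e.g., a multimode optical fiber.
    target = [0.3, -0.7, 1.2]
    return sum((p - t) ** 2 for p, t in zip(params, target))

params = [0.0, 0.0, 0.0]
eps, lr = 0.05, 0.1
for step in range(2000):
    # Random +/- perturbation applied to every parameter simultaneously.
    delta = [random.choice((-1.0, 1.0)) for _ in params]
    plus = [p + eps * d for p, d in zip(params, delta)]
    minus = [p - eps * d for p, d in zip(params, delta)]
    # Two loss measurements yield one stochastic gradient estimate.
    g = (physical_loss(plus) - physical_loss(minus)) / (2 * eps)
    params = [p - lr * g * d for p, d in zip(params, delta)]
```

Each step costs two measurements regardless of parameter count, which is why schemes like this suit hardware where per-parameter gradients are inaccessible.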
Reasoning models now pass all three levels of the CFA exam.

In 2023, ChatGPT (GPT-3.5-turbo) failed CFA Levels I and II. GPT-4 passed Level I but failed Level II. LLMs struggled with finance exams requiring numerical precision, qualitative analysis, and ethical judgment simultaneously. That ceiling has been shattered, which speaks to the potential of reasoning models.

Researchers evaluated state-of-the-art reasoning models on 980 CFA mock exam questions across all three levels. The results: Gemini 3.0 Pro, Gemini 2.5 Pro, GPT-5, Grok 4, Claude Opus 4.1, and DeepSeek-V3.1 all pass every level. Gemini 3.0 Pro achieves 97.6% on Level I. GPT-5 leads Level II with 94.3%. On Level III constructed-response questions, Gemini 3.0 Pro scores 92.0%.

The CFA exam tests an evolving hierarchy of skills. Level I covers foundational knowledge through multiple-choice questions. Level II tests application through case-based vignettes. Level III requires complex synthesis and portfolio construction with both multiple-choice and constructed-response formats.

Quantitative methods, previously a major weakness, now show near-zero error rates for top models. The persistent challenge is Ethics and Professional Standards, where even the best models show 17-21% error rates on Level II.

An interesting pattern emerges with prompting. Chain-of-thought reasoning helps baseline models substantially but shows inconsistent effects on reasoning models for multiple-choice questions. However, CoT remains highly effective for constructed-response questions: Gemini 3.0 Pro jumps from 86.6% to 92.0% on CRQs with explicit reasoning prompts.

Reasoning models now surpass the expertise required of entry-level to mid-level financial analysts. The question shifts from whether AI can pass professional exams to how these capabilities translate to real-world financial decision-making.

Paper: https://t.co/wdwtefM3EN

Learn to build effective AI Agents in our academy: https://t.co/JBU5beIoD0
Comet Android can debug your code from your phone. It analyzed CI logs, traced the failure, figured out a fix, committed it, and opened a PR that's ready to merge https://t.co/lcsuuE7cju
Launching: OPEN SOULS. Open source framework for creating AI souls. Check out the repo, run the example souls, and most of all, have fun https://t.co/hzQ4TBD0vd
Personally feels like we've reached the peak of "Proprietary APIs" and that we're entering a much more balanced world for AI where open-source, training, @huggingface (and other players) will start getting a much bigger share of the attention, usage and revenue. Let's go! https://t.co/nNFntbAmao
Apple presents One Layer Is Enough: Adapting Pretrained Visual Encoders for Image Generation https://t.co/CGs5cb4M9J

discuss: https://t.co/1aUcQ6S8VA
Favorite way to use Copilot with April Yoho from GitHub 🤖✨ https://t.co/sVg21DwdlT
Want to make a big impact in robotics research? Come and work with robots and the smartest students @StanfordSVL! We are hiring a software developer, focusing on simulation for robotics & robot learning. You'll be working directly with me, @jiajunwu_cs, and our amazing students and researchers. Please apply at: https://t.co/jr2hSqMKiP