Your curated collection of saved posts and media
@bephrem Jon Chu's philosophies and work ethic are very Silicon Valley; he's proud to be from here. https://t.co/jhO6vwFBTL Historically, SF has been a source of inspiration and a darling of Francis Ford Coppola, David Fincher, and Clint Eastwood.
@RoKhanna @Jason breadwinning wife, bread-losing husband https://t.co/EPKdfpjVvD
@JayaGup10 Capital deployed has surpassed 2021, haven't seen average time to markup but my guess is our view is skewed because press covers successful raises, not company formations. Plenty of high velocity early stage firms writing checks into companies where multi-stage firms are conflicted out. I'd be more concerned with competition and conflicts than high valuations.
Jeff Bezos and Lauren Sánchez are kicking off the new year in St. Barts. https://t.co/G3fHZKrzRh
If anyone is tired of family-friendly holiday films, I highly suggest The Assessment with Elizabeth Olsen and Himesh Patel. Near-term sci-fi where you're evaluated on whether or not you can have kids. An assessor lives with you and puts you through hell to see when you'll snap. https://t.co/SfneBDVa7F
What is the market? Is this the right founder for the market? That's the trillion dollar question for the AI boom. @martin_casado's new episode is filled with insights for the next year. https://t.co/6G0Ly4Bkyy
https://t.co/aPnhSXqZPQ continuous claude v2 is now up - a setup designed to tackle the scarcest resource in coding: context. Explaining the reasoning behind features below.
A MASSIVE 303-page study from the very best Chinese labs. The paper explains how code-focused language models are built, trained, and turned into software agents that help run parts of development. These models read natural language instructions, like a bug report or feature request, and try to output working code that matches the intent. The authors first walk through the training pipeline, from collecting and cleaning large code datasets to pretraining, meaning letting the model absorb coding patterns at scale. They then describe supervised fine-tuning and reinforcement learning, which are extra training stages that reward the model for following instructions, passing tests, and avoiding obvious mistakes. On top of these models, the paper surveys software engineering agents, which wrap a model in a loop that reads issues, plans steps, edits files, runs tests, and retries when things fail. Across the survey, they point out gaps like handling huge repositories, keeping generated code secure, and evaluating agents reliably, and they share practical tricks that current teams can reuse.
OpenAI, Anthropic, and Google AI engineers use 10 internal prompting techniques that guarantee near-perfect accuracy… and nobody outside the labs is supposed to know them. Here are 10 of them (save this for later): https://t.co/clWkf4BbZm
GLM 4.7 has now taken #2 on Website Arena. It is #1 overall amongst all open weight models and ranks just behind Gemini 3 Pro Preview, a 15-place jump from GLM 4.6. Huge congrats to the team at @Zai_org for this meaningful contribution! https://t.co/s0BlIiH4pL
A must-bookmark for vibe-coders. @YCombinator's guide to making the most of vibe coding: https://t.co/TX0tGkWTFv

Curated list of AI memory tools https://t.co/AnNmfe2Uq0 https://t.co/lBroLI8BVf

BREAKING: OpenAI just released Prompt Packs for every job. 300+ ready-to-use prompts for IT, Sales, Product, Managers, Engineers, Marketing, Executives, and Customer Success. https://t.co/S8CRJToewf
I built a new Python CLI tool called claude-code-transcripts that can create nice readable HTML versions of your Claude Code sessions, both local and pulled from Claude Code for web, and makes it easy to publish them online too https://t.co/pHl8l2lXeK
Soprano: An instant, ultra-lightweight TTS model for realistic speech; generates 10 hours of 32kHz audio in <20s; streams with <15ms latency using just 80M params & <1GB VRAM. Has some limitations and drawbacks. https://t.co/BZmckav7mW https://t.co/gWi1qpevWi
MiniMax M2.1 is OPEN SOURCE: SOTA for real-world dev & agents • SOTA on coding benchmarks (SWE / VIBE / Multi-SWE) • Beats Gemini 3 Pro & Claude Sonnet 4.5 • 10B active / 230B total (MoE). Not just SOTA: faster to infer, easier to deploy, and yes, you can even run it locally. Weights: https://t.co/3lYeI6qyg2

Transcribes and summarizes meetings locally using small language models https://t.co/qrJkQuYdWS https://t.co/AGg4LvZQyX

Wow. Anthropic just curated an impressive collection of use cases for Claude. You already get 39 deep guides and more get added weekly. It's also free and definitely worth bookmarking. (link below) https://t.co/t1FUE24fvP
Memory in the Age of AI Agents This 102-page survey introduces a unified framework for understanding agent memory through three lenses: Forms, Functions, and Dynamics. https://t.co/Mn357FOH15
@simonw When Claude stops, you can use a stop hook to poke it to keep going. e.g. see https://t.co/4WW1baGEeM
Hugging Face has released a 214-page MASTERCLASS on how to train LLMs. It's called The Smol Training Playbook, and if you want to learn how to train LLMs, this GIFT is for you.
> this training bible walks you through the ENTIRE pipeline
> covers every concept that matters: why you train, what you train, and how you actually pull it off, from pre-training to mid-training to post-training
> it turns vague buzzwords into step-by-step decisions: architecture, tokenization, data strategy, and infra
> highlights the real-world gotchas: instabilities, scaling headaches, debugging nightmares
> distills lessons from building actual state-of-the-art LLMs, not just toy models

how modern transformer models are actually built
> tokenization, the secret foundation of every LLM: tokenizer fundamentals, vocabulary size, byte pair encoding, custom vs existing tokenizers
> all the modern attention mechanisms are here: multi-head, multi-query, grouped-query, and multi-latent attention
> every positional encoding trick in the book: absolute position embedding, rotary position embedding (RoPE), YaRN, ablate-by-frequency positional encoding, no position embedding, randomized no position embedding
> stability hacks that actually work: z-loss regularization, query-key normalization, removing weight decay from embedding layers
> sparse scaling, handled: mixture-of-experts scaling, activation ratio tuning, choosing the right granularity, sharing experts between layers, load balancing across experts
> long-context handling via hybrid models: transformer plus state space models (SSMs)

data curation = most of your real model quality
> data curation is the main driver of your model's actual quality; architecture alone won't save you
> building the right data mixture is an art, not just dumping in more web scrapes
> you need curriculum learning: design data mixes that evolve as training progresses
> use adaptive mixtures that shift emphasis based on model stage and performance
> ablate everything: run experiments to systematically test how each data source or filter impacts results
> the SmolLM3 recipe: balanced English web data, broad multilingual sources, high-quality code, and diverse math datasets
> without the right data pipeline, even the best architecture will underperform

the training marathon
> do your preflight checklist or die: check your infrastructure, validate your evaluation pipelines, set up logging, and configure alerts so you don't miss silent failures
> scaling surprises are inevitable; things will break at scale in ways they never did in testing
> vanishing throughput? that usually means a hidden shape mismatch or batch dimension bug is killing your GPU utilization
> sudden drops in throughput? check your software stack for inefficiencies, resource leaks, or bad dataloader code
> seeing noisy, spiky loss values? your data shuffling is probably broken, and the model is seeing repeated or ordered data
> performance worse than expected? look for subtle parallelism bugs: tensor parallel, data parallel, or pipeline parallel gone rogue
> monitor like your GPUs depend on it (because they do): watch every metric, track utilization, spot anomalies fast
> mid-training is not autopilot: swap in higher-quality data to improve learning, extend the context window if you want bigger inputs, and use multi-stage training curricula to maximize gains
> the difference between a good model and a failed run is almost always vigilance and relentless debugging during this marathon

post-training
> post-training is where your raw base model actually becomes a useful assistant
> always start with supervised fine-tuning (SFT): use high-quality, well-structured chat data and pick a solid template for consistent turns
> SFT gives you a stable, cost-effective baseline; don't skip it, even if you plan to go deeper
> next, optimize for user preferences: direct preference optimization (DPO) or its variants such as KTO, ORPO, and APO
> these methods actually teach the model what "better" looks like beyond simple mimicry
> once you've got preference alignment, go on-policy: reinforcement learning from human feedback (RLHF) or on-policy distillation, which lets your model learn from real interactions or stronger models; this is how you get reliability and sharper behaviors
> the post-training pipeline is where assistants are truly sculpted; skipping steps means leaving performance, safety, and steerability on the table

infra is the boss fight
> this is where most teams lose time, money, and sanity if they're not careful
> inside every GPU you've got tensor cores and CUDA cores for the heavy math, plus a memory hierarchy (registers, shared memory, HBM) that decides how fast you can feed data to the compute units
> outside the GPU, your interconnects matter: PCIe for GPU-to-CPU, NVLink for ultra-fast GPU-to-GPU within a node, InfiniBand or RoCE for communication between nodes, and GPUDirect Storage for feeding massive datasets straight from disk to GPU memory
> make your infra resilient: checkpoint constantly, because something will crash; monitor node health so you can kill or restart sick nodes before they poison your run
> scaling isn't just "add more GPUs"; you have to pick and tune the right parallelism: data parallelism (DP), pipeline parallelism (PP), tensor parallelism (TP), or fully sharded data parallel (FSDP); the right combo can double your throughput, the wrong one can bottleneck you instantly

to recap
> always start with WHY: define the core reason you're training a model; is it research, a custom production need, or filling an open-source gap?
> spec what you need: architecture (transformer or hybrid), model size, data mixture, and the kind of assistant or use case you're targeting
> build infrastructure that matches your goals: choose the right GPUs, set up reliable storage, and plan for network bottlenecks
> expect failures, weird bugs, and sudden bottlenecks at scale
> select your stability tricks in advance: know which techniques you'll use to fight loss spikes, unstable gradients, and hardware hiccups

closing notes
> the pace of LLM development is relentless, but the underlying principles never go out of style, and this PDF covers what actually matters no matter how fast the field changes
> systematic experimentation is everything: run controlled tests, change one variable at a time, and document every step
> sharp debugging instincts will save you more time (and compute budget) than any paper or library
> deep knowledge of both your software stack and your hardware is the ultimate unfair advantage: know your code, know your chips
> in the end, success comes from relentless curiosity, tight feedback loops, and a willingness to question everything, even your own assumptions

if I had this two years ago, it would have saved me so much time. if you're building LLMs, read this before you burn GPU months. happy hacking
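The curriculum / adaptive-mixture idea in the thread above can be sketched as stage-dependent sampling weights over data sources. The source names, stage names, and weights below are illustrative, not the playbook's or SmolLM3's actual numbers:

```python
import random

# Sketch of curriculum data mixing: the probability of drawing a
# document from each source shifts by training stage. Stages and
# weights are made-up illustrations of the idea, not a real recipe.
MIXTURES = {
    "pretrain": {"web": 0.70, "code": 0.15, "math": 0.05, "multilingual": 0.10},
    "mid":      {"web": 0.45, "code": 0.30, "math": 0.15, "multilingual": 0.10},
    "anneal":   {"web": 0.20, "code": 0.35, "math": 0.35, "multilingual": 0.10},
}

def sample_source(stage: str, rng: random.Random) -> str:
    # Weighted draw of a data source for the current training stage.
    weights = MIXTURES[stage]
    return rng.choices(list(weights), weights=list(weights.values()))[0]

rng = random.Random(0)
counts = {source: 0 for source in MIXTURES["anneal"]}
for _ in range(10_000):
    counts[sample_source("anneal", rng)] += 1
print(counts)  # late-stage draws favor math and code over raw web
```

The "ablate everything" advice then amounts to rerunning training with one weight (or one source) changed at a time and comparing evals.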
Using a mocap suit to kick yourself in the balls with a robot is a great metaphor to close out 2025. https://t.co/G1hY5Fd6YF
Understanding Git Worktrees. A great Git feature in the age of agentic AI https://t.co/Fo7Qnfceze
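Worktrees let one clone check out several branches in separate directories at the same time, which is exactly what you want when multiple coding agents work in parallel. A quick demonstration driving the `git worktree` CLI from Python (throwaway temp directories, illustrative repo content):

```python
import subprocess
import tempfile
from pathlib import Path

# Create a throwaway repo, then add a worktree so a second branch is
# checked out in its own directory alongside the first.
def git(*args: str, cwd: Path) -> str:
    result = subprocess.run(["git", *args], cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

root = Path(tempfile.mkdtemp())
repo = root / "repo"
repo.mkdir()
git("init", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)  # local identity
git("config", "user.name", "Dev", cwd=repo)
(repo / "README.md").write_text("hello\n")
git("add", ".", cwd=repo)
git("commit", "-m", "init", cwd=repo)

# New branch "feature-x" checked out in its own directory: an agent can
# edit root/feature while another works in root/repo, no stomping.
git("worktree", "add", "-b", "feature-x", str(root / "feature"), cwd=repo)
print(git("worktree", "list", cwd=repo))
```

`git worktree list` shows both checkouts; `git worktree remove <path>` cleans one up when the branch is merged.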
VideoRAG - [KDD'2026] "VideoRAG: Chat with Your Videos" https://t.co/Xm8wsnDUzx
Claude Code is truly amazing. I just single-shotted a Linux app for my ancient outdoor camera system. Now I can make some more enhancements and have a functioning app I want. Will it make me a lot of money? Maybe not, but with AI coding tools I can scratch itches I have had. https://t.co/1fd2r6GlBH
One of the underrated papers this year: "Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation Is Wasteful" (https://t.co/0O4XjGDLIP) (I can confirm this holds for RLVR, too! I have some experiments to share soon.) https://t.co/Vy6yVeGqiK
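The paper's core observation, as I read it: averaging gradients over k microbatches (gradient accumulation) exactly reproduces one large-batch step, so if small-batch training is already stable you can simply step after every microbatch instead. A tiny numeric check of that equivalence on a 1-D least-squares loss (toy data, not the paper's experiments):

```python
# Check that gradient accumulation over 2 microbatches equals one
# large-batch SGD step for loss mean((w*x - y)^2), and that plain
# small-batch SGD (stepping every microbatch) takes a different path.
data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1), (4.0, 8.2)]
lr = 0.01

def grad(w: float, batch) -> float:
    # d/dw mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

# One large-batch step over all 4 examples:
w_big = 1.0 - lr * grad(1.0, data)

# Gradient accumulation: average 2 microbatch grads, then step once:
acc = 0.0
for micro in (data[:2], data[2:]):
    acc += grad(1.0, micro) / 2
w_accum = 1.0 - lr * acc

# Plain small-batch SGD: step after every microbatch instead:
w_sgd = 1.0
for micro in (data[:2], data[2:]):
    w_sgd -= lr * grad(w_sgd, micro)

print(w_big, w_accum, w_sgd)
```

`w_big` and `w_accum` match (same total gradient, one step), while `w_sgd` differs because its second microbatch sees already-updated weights: accumulation buys you nothing but extra memory and latency when the small-batch path is stable.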