Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
deedydas (@deedydas) · Aug 14, 2025

"Teach Yourself Computer Science" is the best resource to learn CS. 2 weeks into vibe coding and non-technical people feel the pain. "I really wish I was technical. I just don't know how to proceed." It takes ~1000hrs across 9 topics to understand CS with any depth. https://t.co/hOc7CrR2oV

briannekimmel (@briannekimmel) · Aug 15, 2025

Many of my female friends started taking creatine for the first time this year and the bubblegum pink gummies and drink packets are very lol The Erewhon-ification of everything is pretty impressive https://t.co/At1RyCSx93

mervenoyann (@mervenoyann) · Aug 14, 2025

Meta released DINOv3 🔥
> 12 SOTA image models (ConvNeXT and ViT) in various sizes, trained on web and satellite data!
> use for anything: image classification to segmentation, depth or even video tracking 🤯
> day-0 support from transformers 🤗
> allows commercial use! 😍
https://t.co/6C0oJmEfWe

UnslothAI (@UnslothAI) · Aug 14, 2025

Google releases Gemma 3 270M, a new model that runs locally on just 0.5 GB RAM. ✨ Trained on 6T tokens, it runs fast on phones & handles chat, coding & math. Run at ~50 t/s with our Dynamic GGUF, or fine-tune via Unsloth & export to your phone. Details: https://t.co/CuD1KaxkDf

@osanseviero • Thu Aug 14 16:04

Introducing Gemma 3 270M 🔥
🤏 A tiny model! Just 270 million parameters
🧠 Very strong instruction following
🤖 Fine-tune in just a few minutes, with a large vocabulary to serve as a high-quality foundation
https://t.co/E0BB5nlI1k https://t.co/XntprMBqSC
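The "0.5 GB RAM" claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming typical weight precisions (the exact GGUF footprint, KV cache, and runtime overhead are not specified in the tweet):

```python
# Rough memory estimate for a 270M-parameter model's weights alone.
# Assumption (not from the tweet): 4-bit quantization = 0.5 bytes/param,
# fp16 = 2 bytes/param. Runtime overhead and KV cache come on top.
params = 270e6

q4_gb = params * 0.5 / 1e9    # 4-bit quantized weights
fp16_gb = params * 2.0 / 1e9  # half-precision weights

print(f"4-bit weights: ~{q4_gb:.2f} GB")   # ~0.14 GB
print(f"fp16 weights:  ~{fp16_gb:.2f} GB") # ~0.54 GB
```

Even unquantized fp16 weights come out around 0.54 GB, which is consistent with the "runs on just 0.5 GB RAM" framing once quantization shrinks the weights further.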

victorialslocum (@victorialslocum) · Aug 13, 2025

"Just fine-tune your embeddings" they said. "It'll fix your RAG system" they said. They were wrong. Here's what actually works:

After working with countless retrieval systems, I've noticed a pattern: teams often jump straight to fine-tuning when their vector search underperforms. But that's like replacing your car engine when you might just need better tires.

First, debug before you fine-tune:
Before spending time and compute on fine-tuning, ask yourself:
• Do many queries need exact keyword matches? → Try hybrid search first
• Are your chunks oddly split or lacking context? → Experiment with different chunking techniques like late chunking
• Is the model missing general semantic relationships? → Try a larger model or one with more dimensions
• Is it only failing on your specific domain terminology? → NOW we're talking fine-tuning territory

When fine-tuning makes sense:
Fine-tuning shines when off-the-shelf models can't grasp your domain-specific language. Pre-trained models learn from Wikipedia and web crawls - they don't know your company's product names or industry jargon. The payoff can be substantial:
• Better retrieval = better RAG performance
• Smaller fine-tuned models can outperform larger general ones
• Lower costs and latency for domain-specific tasks

The technical deep-dive:
Fine-tuning embedding models isn't like fine-tuning LLMs. It's all about adjusting distances in vector space using contrastive learning. Three main approaches:
1. Multiple Negatives Ranking Loss: Just needs query-context pairs. Treats other examples in the batch as negatives - elegant and popular
2. Triplet Loss: Requires (anchor, positive, negative) triplets. Great for precise control but finding good hard negatives is tricky
3. Cosine Embedding Loss: Uses similarity scores between sentence pairs. Perfect when you have gradients of similarity

Practical considerations:
• Start with 1,000-5,000 high-quality samples for narrow domains
• Plan for 10,000+ for complex specialized terminology
• Good news: fine-tuning can run on consumer GPUs or free Google Colab for smaller models
• Always evaluate against a baseline - use metrics like MRR, Recall@k, or NDCG

Pro tip: The MTEB leaderboard is your friend for finding base models, but remember - leaderboard performance doesn't always translate to your specific use case.

The bottom line? Fine-tuning is powerful but it's not a magic bullet. Sometimes your retrieval problems need a different solution entirely. Debug systematically, and when you do fine-tune, start small and iterate.

Check out the full technical blog - it includes code examples for both Hugging Face and AWS SageMaker integrations: https://t.co/PH1djlDFDt

firecrawl_dev (@firecrawl_dev) · Aug 13, 2025

Announcing Open Lovable 🔥 We've built an open-source AI web app builder that can transform any website URL into a working, editable clone, giving you a foundation to build on instantly. All powered by @GroqInc, @e2b, and Firecrawl. https://t.co/GjOXb6yjB6

๐Ÿ–ผ๏ธ Media
Hesamation (@Hesamation) · Aug 14, 2025

NASA published a free Systems Engineering book that you can self-study. It's definitely worth touching on this topic; it turns you into a hiring magnet for many companies. Also, systems thinking is absolutely critical for every AI application. https://t.co/jrDdwQaAzK

TencentHunyuan (@TencentHunyuan) · Aug 14, 2025

🚀 We are thrilled to open-source Hunyuan-GameCraft, a high-dynamic interactive game video generation framework built on HunyuanVideo. It generates playable and physically realistic videos from a single scene image and user action signals, empowering creators and developers to "direct" games with first-person or third-person perspectives.

Key Advantages:
🔹 High Dynamics: Unifies standard keyboard inputs into a shared continuous action space, enabling high-precision control over velocity and angle. This allows for the exploration of complex trajectories, overcoming the stiff, limited motion of traditional models. It can also generate dynamic environmental content like moving clouds, rain, snow, and water flow.
🔹 Long-term Consistency: Uses a hybrid history condition to preserve the original scene information after significant movement.
🔹 Significant Cost Reduction: No need for expensive modeling/rendering. PCM distillation compresses inference steps, boosting speed and lowering costs. This allows the quantized 13B model to run on consumer-grade GPUs like the RTX 4090.

Project Page: https://t.co/uAbiu9FRzF
Code: https://t.co/WgppVz1KUq
Technical Report: https://t.co/aO8plomaTr
Hugging Face: https://t.co/2ZOUWm6KKQ
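The "shared continuous action space" idea, folding discrete key presses into a single (velocity, angle) control signal, can be illustrated with a toy function. This is purely illustrative; the key names, ranges, and conventions here are assumptions, not Hunyuan-GameCraft's actual interface:

```python
import math

# Toy mapping from WASD-style key states to a continuous (velocity, angle)
# action, showing how discrete keyboard input can live in one continuous space.
KEY_VECTORS = {
    "w": (0.0, 1.0),   # forward
    "s": (0.0, -1.0),  # backward
    "a": (-1.0, 0.0),  # strafe left
    "d": (1.0, 0.0),   # strafe right
}

def keys_to_action(pressed, speed=1.0):
    """Sum the unit vectors of pressed keys -> (velocity, angle in radians)."""
    x = sum(KEY_VECTORS[k][0] for k in pressed)
    y = sum(KEY_VECTORS[k][1] for k in pressed)
    velocity = speed * math.hypot(x, y)
    angle = math.atan2(x, y)  # 0 = straight ahead, positive = to the right
    return velocity, angle

print(keys_to_action({"w"}))       # (1.0, 0.0): full speed straight ahead
print(keys_to_action({"w", "d"}))  # diagonal: sqrt(2) speed at 45 degrees
print(keys_to_action(set()))       # (0.0, 0.0): idle
```

The payoff of such a mapping is that intermediate speeds and arbitrary angles, not just the eight keyboard directions, become valid actions for a model to learn over.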

NielsRogge (@NielsRogge) · Aug 14, 2025

Awesome week for computer vision @huggingface 🔥 Besides DINOv3, we added support for LLMDet, the SOTA for zero-shot object detection (@CVPR '25 highlight). Detect instances in scenes just via prompting, no training involved. https://t.co/N1KeXeP9x6

๐Ÿ–ผ๏ธ Media
QGallouedec (@QGallouedec) · Aug 14, 2025

🚨 Big news! We decided that @huggingface's post-training library, TRL, will natively support training Vision Language Models 🖼️ This builds on our recent VLM support in SFTTrainer, and we're not stopping until TRL is the #1 VLM training library 🥇 More here 👉 https://t.co/gjO5npVT5t Huge thanks to @mervenoyann, @SergioPaniego, and @ariG23498 🔥

Alibaba_Qwen (@Alibaba_Qwen) · Aug 14, 2025

Qwen3-30B-A3B-Instruct: with just 3B active parameters, it's closing in on the performance of far larger models. Easily deploy locally or try it now: https://t.co/AmiW3QgjM4 https://t.co/MmQ1DJ88lc

charliermarsh (@charliermarsh) · Aug 13, 2025

Today, we're announcing our first hosted infrastructure product: pyx, a Python-native package registry. We think of pyx as an optimized backend for uv: it's a package registry, but it also solves problems that go beyond the scope of a traditional "package registry". https://t.co/ZYQe06uZTD

charmcli (@charmcli) · Aug 14, 2025

A CLI (with a TUI option!) for working with databases including SQLite, libSQL, PostgreSQL, MySQL, and MariaDB. Oh yessss 💅🏽

Built with ❤️ and:
- Bubble Tea: TUI framework
- Lipgloss: Styling and layout

https://t.co/XHE4wlPxiz

UnslothAI (@UnslothAI) · Aug 08, 2025

You can now fine-tune OpenAI gpt-oss for free with our notebook! Unsloth trains 1.5x faster with -70% VRAM, 10x longer context & no accuracy loss. 20b fits in 14GB & 120b in 65GB GPU. Guide: https://t.co/kdLMAfBwsw GitHub: https://t.co/2kXqhhvLsb Colab: https://t.co/0ErdGWkhgH

CShorten30 (@CShorten30) · Aug 13, 2025

I am SUPER EXCITED to publish the 127th episode of the Weaviate Podcast featuring Lakshya A. Agrawal (@LakshyAAAgrawal)! 🎙️🎉 Lakshya is the lead author of "GEPA: Reflective Prompt Evolution can Outperform Reinforcement Learning"! GEPA is a huge step forward for automated prompt optimization, @DSPyOSS, and the broader scope of integrating LLMs with optimization algorithms! The podcast discusses all sorts of areas of GEPA, from Reflective Prompt Mutation to Pareto-Optimal Candidate Selection, Test-Time Training, the LangProBe Benchmark, and more! 📜 I had so much fun chatting about these things with Lakshya! I really hope you enjoy the podcast! 🎙️

HamelHusain (@HamelHusain) · Aug 14, 2025

If you aren't looking at your data this guy shows up @vishal_learner https://t.co/9JXr32xZK1

HamelHusain (@HamelHusain) · Aug 15, 2025

@gregce10 @vishal_learner Sound on https://t.co/AfDEqj3Lbf

๐Ÿ–ผ๏ธ Media
drew_bent (@drew_bent) · Aug 14, 2025

We launched a Claude Code learning mode! Claude Code not only makes you more productive, but can now also help you get better at coding. Whether you're a CS student or seasoned programmer, it will push you to think deeper about the code you're generating. https://t.co/WCWhmOSXFx

๐Ÿ–ผ๏ธ Media
AIatMeta (@AIatMeta) · Aug 14, 2025

Introducing DINOv3: a state-of-the-art computer vision model trained with self-supervised learning (SSL) that produces powerful, high-resolution image features. For the first time, a single frozen vision backbone outperforms specialized solutions on multiple long-standing dense prediction tasks. Learn more about DINOv3 here: https://t.co/lQpKhJLTZQ
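The "single frozen backbone" workflow amounts to: extract features once with fixed weights, then train only a small task head per downstream task. A minimal numpy sketch of that linear-probe pattern, with a fixed random projection standing in for the (much stronger) pretrained backbone; all names, sizes, and data here are illustrative, not from DINOv3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": weights fixed, never updated during probing.
# A random projection stands in for a real pretrained vision model.
W_backbone = rng.normal(size=(64, 16))

def extract_features(x):
    return np.tanh(x @ W_backbone)

# Toy data whose labels are, by construction, linearly separable in feature space
X = rng.normal(size=(200, 64))
true_head = rng.normal(size=16)
y = (extract_features(X) @ true_head > 0).astype(float)

# Train ONLY a small linear head (logistic regression) on the frozen features
feats = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(1000):
    p = 1 / (1 + np.exp(-(feats @ w + b)))   # sigmoid predictions
    w -= 0.5 * feats.T @ (p - y) / len(y)    # gradient step on head only;
    b -= 0.5 * np.mean(p - y)                # W_backbone is never touched

acc = np.mean(((feats @ w + b) > 0) == (y == 1))
print(f"linear-probe train accuracy: {acc:.2f}")
```

The practical appeal is that the expensive part (feature extraction) runs once and is shared across tasks; each new task only pays for a tiny head.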

๐Ÿ–ผ๏ธ Media
teslayoda (@teslayoda) · Aug 15, 2025

🚨 BREAKING! Tesla is now hiring Vehicle Operators in Delhi and Mumbai. 🇮🇳👀 https://t.co/U7UGP1Ryjm

๐Ÿ”omarsar0 retweeted
O
elvis
@omarsar0
๐Ÿ“…
Aug 14, 2025
259d ago
๐Ÿ†”51501991

Speed Always Wins
Very nice and comprehensive new report on recent efficient architectures for LLMs. https://t.co/X1VRpLj2kN

❤️ 443 likes · 🔁 68 retweets
llama_index (@llama_index) · Aug 14, 2025

New walkthrough: Build web-scraping AI agents with @brightdata and LlamaIndex's agentic framework.
🌐 Learn how to give your AI agents reliable web access
🔧 Set up robust web scraping workflows that can handle dynamic content
🤖 Build intelligent agents that can navigate, extract, and process web data at scale
Read the full walkthrough: https://t.co/66cfh6tdzx

osanseviero (@osanseviero) · Aug 14, 2025

Introducing Gemma 3 270M 🔥
🤏 A tiny model! Just 270 million parameters
🧠 Very strong instruction following
🤖 Fine-tune in just a few minutes, with a large vocabulary to serve as a high-quality foundation
https://t.co/E0BB5nlI1k https://t.co/XntprMBqSC

rasbt (@rasbt) · Aug 14, 2025

Gemma 3 270M! Great to see another awesome, small open-weight LLM for local tinkering. Here's a side-by-side comparison with Qwen3. Biggest surprise: it only has 4 attention heads! https://t.co/Iy7O0DsQGu

@osanseviero • Thu Aug 14 16:04

Introducing Gemma 3 270M 🔥
🤏 A tiny model! Just 270 million parameters
🧠 Very strong instruction following
🤖 Fine-tune in just a few minutes, with a large vocabulary to serve as a high-quality foundation
https://t.co/E0BB5nlI1k https://t.co/XntprMBqSC

rasbt (@rasbt) · Aug 14, 2025

Adding it to the benchmark figure, it doesn't fare that badly against the >2x larger Qwen3: https://t.co/kT6yrdJ6UT

jxnlco (@jxnlco) · Aug 15, 2025

Trust in vertical AI isn't about model accuracy, it's about proving you understand the customer's context. Chris Lovejoy shared with us how dynamic knowledge retrieval + domain expert reviews create systems that adapt to how organizations actually operate, not how we think they do. Talks like today's, and much more, start up again in September. https://t.co/BlKvx4tH16

jxnlco (@jxnlco) · Aug 15, 2025

if you want to check out the full 6 week course just enroll here for 20% off https://t.co/5k1FfU7q9O

trustfundterry (@trustfundterry) · Aug 14, 2025

https://t.co/yX2iSK9dyr

๐Ÿ”jxnlco retweeted
T
Trust Fund Terry
@trustfundterry
๐Ÿ“…
Aug 14, 2025
259d ago
๐Ÿ†”76796093

https://t.co/yX2iSK9dyr

Media 1
โค๏ธ9,076
likes
๐Ÿ”167
retweets
๐Ÿ–ผ๏ธ Media
jxnlco (@jxnlco) · Aug 15, 2025

How can I get a @tbpm hat :( https://t.co/D2giV6TTmX

marouen19 (@marouen19) · Aug 15, 2025

@0xcoconutt The prequel https://t.co/3IuZsCzk74

marouen19 (@marouen19) · Aug 15, 2025

Here's a selfie with Irene. Can we now close that partnership? https://t.co/nID0SbKufh
