Your curated collection of saved posts and media
There should be a word for this trait. Something like "selfmaking". You make your self by making things yourself.
The Chef Boyardee Culinary Institute
@bendee983 I got it from @math_rachel :)
I literally have an ongoing cancer experiment where 100% of the untreated and control animals have had to be euthanized while 100% of the treatment animals are seemingly unaffected. But we're still extremely far away from "proving that it works." Science is hard.
This is a really interesting thread. If we literally already have a cure for (some kinds of) cancer, but can't *prove* it's "safe and effective", should terminally ill patients have an option to use it anyway?
Since we *already* can do n=1 custom cancer vaccines for *pets* at a price that's economic for at least some people, the "we can't scale it" issue doesn't seem entirely true either?
Gradually unfollowing, and usually blocking, people who post AI writing while passing it off as their own. Several hundred blocked so far. Writing has historically been useful as proof-of-thought. AI writing tends to be proof-of-performing-thought. These are not the same. Indeed, the latter tends also to be proof-of-lack-of-thought, and blocking is the appropriate action.
@michael_nielsen Darn, unfollowed even though I don't do that
@BEBischof entropy maxing
@noyeahobviously didn't realize we had our own version of that Murray Hill bro account
@dchaplot We'll take bets on how long you will last
@QRJ211 @andrewztan Totally possible in industry research labs.
@TitanUranus @andrewztan We're talking about compensation in industry, not about academic salaries or salaries in public research institutions, which I agree are way too low in France.
@BEBischof It's a mirror
@BEBischof Does it use lots of acronyms
Bought some sunglasses to commemorate fast mode https://t.co/AC1LmXPKe8
living in sf is amazing because it's March and I'm seeing sunshine, butterflies, and hummingbirds everywhere
My allergies are crazy right now.
@EthanLipnik What project?
Happy #piday 3.14...
@iScienceLuvr @sopranotiara wow this was more than a decade ago! Cute! https://t.co/Pz733JXnjD
@Devinbuild The revolution is still growing. Awesome!
Continual Learning from Experience and Skills for Multimodal Agents
New research on LLM Agent Generalization. RL fine-tuning makes agents strong in familiar environments, but it struggles to transfer to unseen ones.

This paper systematically studies RL generalization for LLM agents across three axes: within-environment transfer across task difficulty, cross-environment transfer to unseen settings, and sequential multi-environment training.

Within an environment, RL delivers massive gains. Training on easy WebShop tasks improves hard-task performance by 60+ points. Easy-to-hard curriculum learning adds another 2-3 points on top.

Across environments, transfer is weak. Agents average only 3.3-3.4 point improvements on unseen environments. Training on BabyAI actually drops WebShop from 28.6 to 10.3.

Sequential training is where it gets interesting. Training across five environments sequentially achieves performance comparable to joint training, with minimal forgetting.

The authors conclude that RL fine-tuning doesn't produce generally capable agents out of the box, but sequential training across diverse environments offers a practical path to broad competence.

Paper: https://t.co/BYfVK3DPoH Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
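For intuition, the sequential-training setup the thread describes can be sketched as a toy loop: one shared policy is fine-tuned on each environment in turn rather than on all environments jointly. This is a minimal sketch only; the environment names come from the thread, and the RL update step is a hypothetical placeholder, not the paper's actual method.

```python
def train_sequentially(envs, steps_per_env=3):
    """Fine-tune one shared policy on each environment in sequence.

    The dict below stands in for shared model weights; each loop
    iteration stands in for one RL fine-tuning update (placeholder,
    not the paper's algorithm).
    """
    policy = {"updates": 0, "trained_on": []}
    for env in envs:
        for _ in range(steps_per_env):
            policy["updates"] += 1          # placeholder for an RL update
        policy["trained_on"].append(env)    # environment finished, move on
    return policy

# Sequential pass over several environments (names from the thread).
policy = train_sequentially(["BabyAI", "WebShop", "ALFWorld"])
```

The key design point the paper tests is that this one-after-another schedule, despite risking catastrophic forgetting, ends up comparable to training on all environments jointly.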