Your curated collection of saved posts and media
Worries in the dance. https://t.co/kRj98WLGVH
@nada667284 [Arabic reply; text corrupted by encoding] https://t.co/fGoXLRMRmj

🚨 « Yacine Adli was very blunt, unlike other players, like Cherki. 🇫🇷🇩🇿 » Marouane Chamakh's 🇲🇦 blunt take on dual nationals: « Dual nationals? You have to be frank and not play with two nations, two peoples (…) Yacine Adli 🇫🇷 was very blunt, unlike other players like the French international who is at Manchester City (Cherki), or others who leave the doubt hanging. He (Adli) may have gotten lynched for it, but he was clear and direct. He didn't play a double game; maybe he'll go back on his decision, but from the start he hasn't played on that. » (@Kampo_officiel)

what if you never stop training? Olmo 3.1 32B Think & Instruct out now! https://t.co/raYPRuIeon
Remember always listen to Karpathy https://t.co/qJwTpoesJh

So excited to finally talk about this work! Veo is a surprisingly strong world simulator. We fine-tuned Veo on action-conditioned, multi-view robotics data. Key result: running a policy in the world model is strongly correlated with real-world results. A few important takeaways: 1) Veo Robotics models real-world physics and robot interactions 2) The base model's world knowledge is retained after fine-tuning and can model OOD scenarios not seen in the robotics data 3) The world model can be used to score task success or failure for a given policy 4) This proves useful for predictive red teaming: simulate dangerous or rare scenarios that would be difficult or irresponsible to execute on the real robot, and judge the policy's performance. I couldn't be more excited about where generalist video models are headed.
Generalist robots need a generalist evaluator. But how do you test safety without breaking things? 💥 Introducing our new work from @GoogleDeepMind: Evaluating Gemini Robotics Policies in a Veo World Simulator https://t.co/ZjvpYXFddZ 🧵 https://t.co/h2ZbeEKYEn
Hanging out with hyper smart young people today (14-year-old @albysjourney and 20-year-old @blevlabs). I got the feeling his dad was like me. Struggling to keep up. San Francisco AI entrepreneurial learning day. This city is so crazy. Now out to dinner with an investor. Oh and both Tesla and Waymo rides are booked so had to take an Uber.
This is the new SpaceX ISS Docking Simulator game added in Tesla's 2025 Holiday Update. It's the actual interface used by NASA astronauts to manually pilot the SpaceX Dragon 2 vehicle to the International Space Station. https://t.co/8z0ZpqN6TY
Who's playing the @SpaceX Docking Simulator? https://t.co/m9Db0Bt8o7
Better than Black 🛰 In this clip from the upper stage of one of this week's Starlink launches, you can see the stack of sats and then a very shiny mirror that reflects the upper stage spinning away over Earth. So much cool engineering visible here: https://t.co/M0gYV0m2oR

The first Starlink sats were black to avoid being visible at dawn and dusk (when the sun still illuminates them at high altitude, but people on the ground are in the dark). But a black coating absorbs ~96% of the light hitting it and heats the satellites up. What's better? A near-perfect mirror that reflects sunlight away from the satellite and from Earth, making the flat phased-array antenna panels that always face Earth nearly invisible (reflecting over 99.9% of light). The core of the film is a Bragg mirror, a dielectric mirror film made of many super-thin layers of plastic with different refractive indices that create internal interference patterns to reflect light while letting radio waves pass through unimpeded. Phased-array antennas are themselves amazing: a 2D grid of transceivers that collectively steer microwave beams to the Starlink terminals below, with no moving parts.

You can also see one of my favorite SpaceX engineering innovations, developed with an integrated perspective on satellite and rocket design. At the end of the video, you see the upper stage of the Falcon 9 rocket spinning away. Prior to release of the stack of satellites, the entire upper stage goes into a slow spin. Then it pops a couple of retention latches, and the stack of 27 satellites splays out like a deck of cards, each satellite with a slightly different angular momentum given its distance from the centroid of spin. This tiny difference in velocity gives the satellites greater separation over time as they orbit Earth. No springs needed; the whole system sets up the deployment dynamics in an elegant, minimalist manner.
And this early phase is when they look like a string of pearls, still close together, and visible at dawn or dusk before they get the highly reflective surfaces rotated into proper position. At the top and bottom at 13 seconds in, you can see the two retention bars that hold the stack of sats in place during launch. That's very little wasted weight versus a typical ESPA bus holding a bunch of satellites on a central metal cylinder as you see in Transporter missions.
Deployment of 27 @Starlink satellites confirmed https://t.co/3ZNICuEjIO
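The spin-release dynamics described above lend themselves to a quick back-of-the-envelope check: every satellite in the stack shares the same angular rate, so release velocity v = ω·r depends only on distance from the spin centroid. The spin rate and stack spacing below are invented purely for illustration; only the 27-satellite stack and the v = ω·r reasoning come from the posts.

```python
import math

# Assumed numbers for illustration only: 1 deg/s spin, 15 cm stack spacing.
omega = math.radians(1.0)   # spin rate in rad/s (assumed)
spacing = 0.15              # spacing between neighboring sats, meters (assumed)
n_sats = 27

# Tangential release speed grows with distance from the spin centroid,
# so the outermost satellites drift away fastest.
center = (n_sats - 1) / 2
speeds = [omega * abs(i - center) * spacing for i in range(n_sats)]
print(f"max release speed: {max(speeds) * 100:.1f} cm/s")

# Adjacent satellites differ by a tiny delta-v ...
dv = omega * spacing
# ... which compounds into meters of separation over one ~95-minute orbit.
drift = dv * 95 * 60
print(f"neighbor delta-v: {dv * 100:.2f} cm/s, drift after one orbit: {drift:.0f} m")
```

Even a centimeter-per-second difference adds up to meters per orbit, which is why no springs are needed to separate the stack.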
Why Ursula von der Leyen Isnโt Elected https://t.co/rL2lJ1Zopr
My Native DAM ๐ Good morning to All ๐ https://t.co/f2wY6PvxA7
The data is from a "paper" hosted on the author's 90s-ass-looking webpage and reviewed by the extremely well regarded researcher… some anonymous guy from Hong Kong who only ever reviewed this one "paper". A random post on Twitter is a better source lmao. https://t.co/lW5C2JNsst

Is that paper you are reading trustworthy, #neoTwitter? Safeguarding the integrity of scientific literature in the 21st century @Ped_Research https://t.co/ERtQXVZhH3 @EBNEO @ESPR_ESN @nicupodcast https://t.co/oNCMw1D5f2

Paper: https://t.co/zu0BCFH67C Code: https://t.co/ljZTA90IJ8 Model: https://t.co/tEfsyrkkAM Website: https://t.co/cTf5c5CigY #AI #LLM #Reasoning 11/

An accompanying News & Views by Bart Ghesquiere for this paper is now available! @VIBMetaboCoreLV https://t.co/jM2ljSwiB9 ๐https://t.co/KKGUGlCJTZ
AI Native Daily Paper Digest - 2025-12-11. Follow @AINativeF for the latest insights on AI Native. Covering AI research papers from Hugging Face, featured in the image. Stay updated with the latest research trends and dive deep into the future of AI! #AI #HuggingFace #AIPaper #AINative #AINF
Appendix: Today's AI research papers
1. StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation
2. BrainExplore: Large-Scale Discovery of Interpretable Visual Representations in the Human Brain
3. Composing Concepts from Images and Videos via Concept-prompt Binding
4. OmniPSD: Layered PSD Generation with Diffusion Transformer
5. InfiniteVL: Synergizing Linear and Sparse Attention for Highly-Efficient, Unlimited-Input Vision-Language Models
6. HiF-VLA: Hindsight, Insight and Foresight through Motion Representation for Vision-Language-Action Models
7. Fast-Decoding Diffusion Language Models via Progress-Aware Confidence Schedules
8. EtCon: Edit-then-Consolidate for Reliable Knowledge Editing
9. Rethinking Chain-of-Thought Reasoning for Videos
10. WonderZoom: Multi-Scale 3D World Generation
11. UniUGP: Unifying Understanding, Generation, and Planing For End-to-end Autonomous Driving
12. Towards a Science of Scaling Agent Systems
13. Learning Unmasking Policies for Diffusion Language Models
After ChatGPT's release at the end of 2022, the usage frequency of certain words and phrases in the medical literature increased. However, use of these terms was already trending upward before 2022, suggesting that the arrival of LLMs amplified existing trends. A lexical analysis study of 27.5 million structured literature reviews. Perspect Med Educ 2025 Dec 2 https://t.co/FvvRTCSYdX
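The study's "already rising before 2022" point boils down to comparing a pre-release growth trend with post-release levels. A toy sketch of that comparison, with all yearly frequencies invented for illustration (the actual study analyzed 27.5 million abstracts):

```python
# Invented yearly frequencies (occurrences per 10,000 abstracts) for one term.
per_year_freq = {
    2018: 1.0, 2019: 1.3, 2020: 1.7, 2021: 2.2, 2022: 2.8,
    2023: 4.9, 2024: 6.5,
}

# Average multiplicative year-over-year growth before the LLM era ...
pre = [per_year_freq[y] for y in range(2018, 2023)]
pre_growth = (pre[-1] / pre[0]) ** (1 / (len(pre) - 1))

# ... extrapolated forward gives a "no-LLM" expectation for 2024.
expected_2024 = pre[-1] * pre_growth ** 2
observed_2024 = per_year_freq[2024]

# Observed well above expected is consistent with LLMs amplifying a trend
# that was already rising, rather than creating the usage from nothing.
print(f"pre-2022 growth: x{pre_growth:.2f}/yr")
print(f"expected 2024: {expected_2024:.1f}, observed 2024: {observed_2024:.1f}")
```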
AI Native Daily Paper Digest - 2025-12-12. Follow @AINativeF for the latest insights on AI Native. Covering AI research papers from Hugging Face, featured in the image. Stay updated with the latest research trends and dive deep into the future of AI! #AI #HuggingFace #AIPaper #AINative #AINF
Appendix: Today's AI research papers
1. T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground
2. Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving
3. Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation
4. OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification
5. Achieving Olympia-Level Geometry Large Language Model Agent via Complexity Boosting Reinforcement Learning
6. MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos
7. BEAVER: An Efficient Deterministic LLM Verifier
8. From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models
9. VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction
10. Thinking with Images via Self-Calling Agent
11. Evaluating Gemini Robotics Policies in a Veo World Simulator
12. StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space
13. Stronger Normalization-Free Transformers
@IsBeVerse That's a retweet, dumbass. I don't know what RN Savior act you're talking about, but I do know that now you're trying to be the savior when just the other day you were telling us how much you hate indigenous people. https://t.co/xQfWOuTApp

https://t.co/XJzDc7YRun
Lots of discussion on Jevons Paradox for AI: does cheaper AI lead to more total usage? New paper finds short-run elasticity ~1 (so no short-run paradox) but prices fell 1000x in two years & demand exploded. So Jevons happens over time, as firms gradually adopt AI at lower prices https://t.co/hKG6lWeuFd
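The elasticity argument in the post above can be made concrete with constant-elasticity demand, where usage scales as (new price / old price) raised to the negative elasticity. The 1000x price drop and elasticity ~1 come from the post; the functional form is a textbook simplification, not the paper's exact model.

```python
# Constant-elasticity demand: usage scales as price_ratio ** (-elasticity).
def usage_multiplier(price_ratio: float, elasticity: float) -> float:
    return price_ratio ** (-elasticity)

# Short run: elasticity ~1, so halving the price roughly doubles usage and
# total spending stays flat -- no short-run Jevons rebound.
print(usage_multiplier(0.5, 1.0))       # -> 2.0

# Over two years prices fell ~1000x; at elasticity 1 that alone implies
# ~1000x usage, so any adoption growth on top pushes total spend up --
# the Jevons pattern emerging over time rather than instantly.
print(usage_multiplier(1 / 1000, 1.0))
```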

This was interesting, suggesting: 1) The market for AI is very dynamic, new models and providers switch leads often, and different leaders exist for different spaces 2) A lot of companies have not leveraged the power of very smart AIs outside of coding and tech, going with cheap https://t.co/zNkoEwJH4L

🏷️45,000 🧡🫧 https://t.co/VbPcKaKHGG

I did a free ad for him and he wants to dash me one native wear, but I'll prefer to give it to one of my followers. Who needs native wear? https://t.co/qRjeM1CpsM

Sakana AI is recruiting Applied Research Engineers to take on uncharted solution development using world-leading autonomous-agent technology. We look forward to having you join as a core member who will further accelerate putting this technology to work in society https://t.co/eQ7e0rIOmg (Full-time hires and student interns welcome! ✨) https://t.co/jnpuUp0ajJ

For details about the Applied Team, please see this introduction: https://t.co/11hP67FI84 https://t.co/PxwqbnGjak


Analysis of the latest 1,000 posts in my AI Newsmaker's list (the most followed 1,600 accounts here on X). Shows a lot of the value here on X that just goes by in an hour. Thanks @blevlabs for letting me use your cognitive AI to do these reports. Shows the hottest posts, and people in AI. R&D for the future of media. https://t.co/4tVEIegd3Q
This is wild! There's an AI that literally rewrites its own trading code to beat the market. Not tuning parameters. Not learning patterns. Actually rewriting the Python functions that decide when to buy and sell. Let me explain this insanity:

Traditional trading bots work like this:
Human codes strategy
AI adjusts weights/parameters
Strategy structure stays FIXED
Market changes → bot breaks
Human fixes it manually
This is exhausting and doesn't scale.

ProFiT (Program Search for Financial Trading) does something completely different. It treats trading strategies as living organisms that evolve. Each strategy is actual Python code. Not weights. Not parameters. CODE.

Here's the evolutionary loop:
1️⃣ Start with a basic strategy (say, MACD crossover)
2️⃣ LLM reads the code + performance report
3️⃣ LLM diagnoses weaknesses
4️⃣ LLM proposes improvements
5️⃣ New code gets backtested
6️⃣ If good → kept in population
7️⃣ Repeat forever

The genius part? "Semantic mutation." Traditional genetic programming randomly flips bits of code (often breaking it). ProFiT's LLM actually understands what the code does: "This strategy lacks volatility filters. Add ATR-based gating to reduce false signals." LOGICAL evolution.

And they don't keep just ONE best strategy. They maintain a POPULATION of all strategies that beat a minimum threshold. Why? Diversity prevents getting stuck in local optima. It's like keeping multiple species alive instead of just the "fittest" one. Quality-Diversity approach.
Real results across 7 futures markets (E6, ES, Bitcoin, etc.):
• Beat Buy-and-Hold in 77% of cases
• Beat random strategies 100% of the time
• +44% average return improvement over seed strategies
• +0.57 Sharpe ratio improvement
Statistically significant (p < 0.05 on Wilcoxon tests).

Let's look at one evolution path:
Generation 0: Basic MACD crossover → Returns: -54% → 25 lines of code
Generation 15: MACD + regime filter + ATR stops + volatility gates + debouncing → Returns: +0.77% → 90 lines of sophisticated logic
The LLM built that complexity.

How does this compare to prior work?
🔴 Reinforcement Learning: Optimizes weights, structure stays fixed
🔴 Classic GP: Random mutations, no reasoning
🔴 Codex/AlphaCode: One-shot generation, no iteration
🟢 ProFiT: Iterative, semantic, empirically grounded
It's a NEW paradigm.

Pain points this solves:
Non-stationarity (markets change constantly) → code evolution adapts structure, not just params
Black boxes you can't trust → human-readable Python you can inspect
Constant human intervention → autonomous improvement loop

The validation methodology is RIGOROUS: 5-fold walk-forward cross-validation; 2.5 years train, 6 months validation, 6 months test; 10-day dormant windows to prevent lookahead bias; fixed transaction costs (0.2%); multiple seed strategies tested. This isn't overfit garbage.

Inspiration comes from wild places:
🧬 Genetic Programming (Koza)
🤖 Gödel Machines (self-improving systems)
🎯 MAP-Elites (quality-diversity)
🧠 LLM code generation (Codex)
They mashed it all together and pointed it at financial markets.

Current limitation they acknowledge: testing against FIXED historical data doesn't show how it adapts to real-time regime changes. They're working on that. (Imagine this running live, evolving strategies as the market shifts beneath it...)
Future directions they hint at: evolving the prompts themselves (meta-optimization), cross-asset strategy evolution, multi-parent recombination between strategies, real-time deployment with continuous adaptation. This is just the beginning.

Bottom line: We're shifting from "training AI to predict markets" to "AI that rewrites how it thinks about markets." Not parameter learning. Strategy evolution. The paper: "ProFiT: Program Search for Financial Trading" by Siper et al. Wild times ahead.
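The evolutionary loop from the thread above can be sketched in a few dozen lines. Everything here is a stub: `backtest` is a toy fitness function standing in for running strategy code on market history, and `semantic_mutation` stands in for the LLM editing Python source; none of it is ProFiT's actual implementation.

```python
import random

random.seed(0)  # deterministic toy run

def backtest(strategy: dict) -> float:
    """Stub fitness standing in for running strategy code on history.
    Toy objective: reward an ATR filter and a stop-loss near 2%."""
    score = -abs(strategy["stop_loss"] - 0.02) * 100
    return score + (1.0 if strategy["use_atr_filter"] else 0.0)

def semantic_mutation(parent: dict) -> dict:
    """Stub for the LLM step: a targeted, meaningful edit (e.g. 'add
    ATR-based gating') rather than a random bit-flip of the source."""
    child = dict(parent)
    if not child["use_atr_filter"] and random.random() < 0.5:
        child["use_atr_filter"] = True                   # add volatility filter
    child["stop_loss"] += random.uniform(-0.005, 0.005)  # tune the stop
    return child

# Generation 0: one basic seed strategy (think: plain MACD crossover).
population = [{"use_atr_filter": False, "stop_loss": 0.05}]
threshold = backtest(population[0])  # retain anything beating the seed

for generation in range(15):
    children = [semantic_mutation(p) for p in population]
    # Quality-diversity flavor: keep ALL children above the bar, not just one.
    population += [c for c in children if backtest(c) > threshold]
    population.sort(key=backtest, reverse=True)
    population = population[:8]      # cap the population size

print(f"best fitness after 15 generations: {backtest(population[0]):.2f}")
```

The key design choices mirrored here are the retention threshold (a population, not a single winner) and mutations that are targeted edits rather than random perturbations.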
[Thai post; text corrupted by encoding] #RerouteEcoreviveHarmony #bbillkin https://t.co/a2RnzWlWA5
@PalmerReport How they function in the world is beyond me. https://t.co/dXxwsEVX2v

#NativeAmerican #NativeTwitter #native #NativeCamp https://t.co/OA3GEvLevg
