Your curated collection of saved posts and media
may I present https://t.co/9i3jTgUIgn
Anyone who skips @billions_ntwk will regret it. This network is made for real humans, not the noise. I'm locked on $BILL @jgonzalezferrer https://t.co/GY81iY4VCD

Workshop confirmed! Level up your AI creative skills at DevFest Cairo. Discover how to blend image editing, voice, and video into stunning assets using Google AI. Feeling bold? Bring your best profile photos, the stage is yours. #GoogleDeveloperExpert #AI https://t.co/bL8upyqhgb

I love this figure from Anthropic's new talk on "Skills > Agents". Here are my notes:
- The more skills you build, the more useful Claude Code gets. It makes perfect sense: procedural knowledge and continuous learning for the win!
- Skills are essentially how you make Claude Code more knowledgeable over time, which is why I argued that Skills is a good name for this functionality.
- Claude Code acquires new capabilities from domain experts (they are the ones building skills), can evolve skills as needed, and can forget the ones it no longer needs.
- It's a collaborative effort, one that easily expands to entire teams, communities, and orgs (via plugins).
- Skills are particularly useful for workflows where information and requirements constantly change: finance, code, science, and human-in-the-loop workflows are all great use cases.
- You can build new Skills with the built-in skill creation tool, so every new skill follows the best practices. Or you can do what I did and build your own skill creator that produces custom skills catered to the work you do (see the sketch below). Just another level of customization that Skills enables.
- Skills' flexibility lets future capabilities be integrated everywhere. Competitors don't have anything remotely close to this type of ecosystem.
The deep understanding Anthropic's engineers have of context management tools and agent harnesses is something to admire. Very bullish on Claude Code.
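A skill is just a folder holding a SKILL.md whose YAML frontmatter tells Claude Code when to load it, which is what makes a DIY skill creator so easy to write. Below is a minimal Python sketch of one, assuming the .claude/skills location and the name/description frontmatter fields from Anthropic's published Skills format; the release-notes skill it scaffolds is purely hypothetical.

```python
from pathlib import Path

def create_skill(root: Path, name: str, description: str, body: str) -> Path:
    """Scaffold a minimal skill: a folder with one SKILL.md whose YAML
    frontmatter (name, description) tells Claude Code when to load it."""
    skill_dir = root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    skill_md = skill_dir / "SKILL.md"
    skill_md.write_text(
        "---\n"
        f"name: {name}\n"
        f"description: {description}\n"
        "---\n\n"
        f"{body}\n"
    )
    return skill_md

# Hypothetical example: encode a release-notes procedure as a skill.
create_skill(
    Path(".claude/skills"),
    name="release-notes",
    description="Draft release notes from merged PRs. Use when asked to summarize a release.",
    body=(
        "1. List the PRs merged since the last tag.\n"
        "2. Group them by area and impact.\n"
        "3. Write one plain-language line per change."
    ),
)
```

The point of a custom creator over the built-in tool is that you can bake your own conventions (naming, structure, checklists) into every skill you generate.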
"This book changed me. We are takers. We take from each other. We take from the animals. We take from the land..." #nativeamericans https://t.co/ElnFZUjmCK https://t.co/rDRALlbcbP

"We depend on nature not only for our physical survival; we also need nature to show us the way home, the way out of the prison of our own minds." ~ E. Tolle, https://t.co/AcqZ03kp5o

If you're traveling to a #nationalpark this summer, you're traveling to #Nativelands. Today, the @NatlParkService is reconciling with its past by collaborating with Tribal communities to maintain healthy ecosystems for future generations. Learn more: https://t.co/YKSU85Zafl https://t.co/DrZnFzVwxn

https://t.co/4RyTAQkRkf - Rharos Network Teams Launch Native Rowa Loan Lending Barrier #NFTNews
Slow day, have signal tho. https://t.co/qfxdOk6ZpG
[More content for @APEPE_MEME, the meme that keeps on spreading] APEPE is also listed on Korean domestic exchanges. It's a project combining two big memes, and it's been growing fast lately! The event rewards are as follows! Top 80: rewards paid on a tiered basis by final leaderboard ranking. Participants' contest: 5 fried chickens, 100 coffees. There's a bit under two weeks left and the reward pool is 5,000 USDT in total, so I really recommend taking part! Personally the topic isn't quite my usual thing... I'll probably have to tie it in with my existing content! (Of course, just tagging it doesn't count)
[If you make @APEPE_MEME content, you might find this useful] Premium -> in X Pro, enter this in the search bar: include:nativeretweets (filter:self_threads OR -filter:nativeretweets -filter:retweets -filter:replies) @APEPE_MEME filter:blue_verified lang:ko and you can find the Korean accounts making content for the same project. Pretty useful, right? In fact, if you put the ticker or @ of any project you want in that handle slot, you can search across the projects you cover, all at once. The nice part is that this makes it easy both to get RTs and to give RTs. I actually RT all the @DOKDODAO content this way. The only problem is there's too much of it lol
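Since the trick generalizes to any handle or ticker, here is a small hypothetical Python helper that rebuilds the same query for an arbitrary project; the search operators are copied verbatim from the post, and only the project slot and language code are parameters.

```python
def content_search_query(project: str, lang: str = "ko") -> str:
    """Rebuild the X Pro search from the post above for any project.
    Operators are verbatim from the original query; only the project
    handle/ticker and the language code are swapped in."""
    return (
        "include:nativeretweets "
        "(filter:self_threads OR -filter:nativeretweets -filter:retweets -filter:replies) "
        f"{project} filter:blue_verified lang:{lang}"
    )

print(content_search_query("@APEPE_MEME"))  # the exact query from the post
print(content_search_query("@DOKDODAO"))    # the other project mentioned above
```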
Native. Permissionless. Instant. It only happens at @RialoHQ, so welcome!! https://t.co/t1zWZ87l2E

Bold Lacanian read on AI hallucination, but the analogy leans on heavy anthropomorphic baggage. All LLM outputs start the same way: every token is just next-token prediction. A continuation becomes a hallucination only when a human adds real-world context the model never had. There is no psyche trying to fill a lack. Personality in LLMs is RLHF rewarding fluency, not truth. Apparent traits are prompt-shaped data artefacts, as in Han et al. 2025 (arXiv:2509.03730): self-reported Big Five scores map to behaviour in only about 24 percent of cases. This is a stochastic funnel, not a barred subject. The confidence in hallucinations is not Lacanian jouissance; it is the ELIZA effect. We project coherence and intention, then blame the model for a mismatch created by our own projection. Keep the poetic mirror, but mark where it stops explaining and starts flattering our desire to see a mind in a transformer. Great paper, but it needs a reminder to flag every anthropomorphic move with the actual technical context: call out when you are interpreting output after the fact rather than describing how it was produced, and avoid projecting human traits that do not exist. Follow for more insights or subscribe to receive updates in your inbox: https://t.co/DybOvoBDEw
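To make the "stochastic funnel" point concrete, here is a toy Python sketch of the only operation happening at generation time: score candidate tokens, softmax, sample. The vocabulary and logits are invented for illustration; note that nothing in this loop distinguishes a faithful continuation from a hallucinated one.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """One next-token step: softmax over scores, then sample.
    There is no separate mechanism that marks an output as true or false."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total
        if r < acc:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Invented logits after a prompt like "The capital of Australia is":
# a fluent wrong answer can score close to the right one, and sampling
# treats them identically; "hallucination" is a label applied afterwards.
logits = {" Canberra": 2.1, " Sydney": 1.9, " Melbourne": 0.4}
print(sample_next_token(logits))
```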
@AndrewYang AI = Actually Indians. The encrapification of all things will continue. https://t.co/jps8ooTg42

@narindertweets You know how racism works. You're a pro.... https://t.co/ZgCzYuYVz6

A stunning drone and fireworks show lighting up Guangzhou at night https://t.co/cY9Ot5MmMm
As the first Native player in the @NWSL, @gohaam, who is Navajo, San Felipe Pueblo & Black, is making history. But who made Madison into the player & person she has become? The army of women who raised her, she says. Watch the full doc on YouTube (link in bio). https://t.co/LEatnhgd6S
@Scobleizer @Wassieweb3 @autkast @briansolis @ServiceNow https://t.co/L7vgYmVTub
today, we're launching Mosaic Avatars. create realistic AI UGC content with natural-looking personas, dynamic movement, and product placement. comment "FREEGEN" to get unlimited free generations for the next 2 days. this release comes with 3 key features. https://t.co/i6VEzFIfNn
https://t.co/IoswhL2RXI
Judi Dench makes shocking defense of Harvey Weinstein after rape conviction: "He's done his time" https://t.co/nttWLcPaPO https://t.co/EzDcKXpvCP

Mowalola by Aidan Zamiri for The NATIVE https://t.co/Sh6BKI8MnD

some very important life lessons from @amaarae ✨ https://t.co/azEZt6YXhD
as Native American Heritage Month ends, I want to recognize all Native Americans & the communities they come from. each community has a unique culture and story to tell, & I am grateful for the opportunity to tell my own & represent my team, my family, & my people. #Katishyameh https://t.co/wniXMo2U3u

🚨 DIGITAL COVER ALERT 🚨 @DamsonIdris for The NATIVE 8PM TONIGHT https://t.co/e2a9dWjOPl
This has to be acknowledged and is really important. Not because I want to call out one person but because it's indicative of how our larger culture continues to perpetuate harmful stereotypes about Native Americans and Indigenous cultures. (1/3) https://t.co/RGnKEdjGnW

"Harvard has ruined more good negroes than bad whiskey." https://t.co/SlBH1cKp7T

China's open-source AI is a national advantage. The models are akin to studying together to ace a test instead of relying on individual knowledge. https://t.co/KO0nvXCFPH @kaifulee @ft