Your curated collection of saved posts and media

Showing 31 posts Β· last 14 days Β· sorted by score
@_akhaliq Β· Dec 09, 2025

Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning https://t.co/IESVu82IDV

@_akhaliq Β· Dec 09, 2025

discuss: https://t.co/5hJTvqYvAY

@_akhaliq Β· Dec 09, 2025

EgoEdit: Dataset, Real-Time Streaming Model, and Benchmark for Egocentric Video Editing https://t.co/4o7doyjehh

πŸ–ΌοΈ Media
@_akhaliq Β· Dec 09, 2025

discuss: https://t.co/qRnDg4D3vw

@_akhaliq Β· Dec 09, 2025

Scaling Zero-Shot Reference-to-Video Generation https://t.co/PRpdliulCr

πŸ–ΌοΈ Media
@_akhaliq Β· Dec 09, 2025

discuss: https://t.co/e3X7wWowJu

@_akhaliq Β· Dec 09, 2025

DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems https://t.co/iTf19zFT6n

@_akhaliq Β· Dec 09, 2025

discuss: https://t.co/uro203m0Bo

@rasbt Β· Dec 09, 2025

The slightly longer version for a bit more context: https://t.co/70JZvlvBlT

@NaomiSeibt Β· Dec 09, 2025

Georg Restle is a tax-funded state media indoctrinator. He and his propaganda monopoly are responsible for the rise of Orwellian fascism in Europe. https://t.co/khhtcgIkD2

@GoogleAI Β· Dec 09, 2025

🧡(2/5) Build interactive, playable 3D designed games with a single prompt in @GoogleAIStudio. Example prompt: β€œCreate a polished, retro-futuristic 3D spaceship web game contained entirely within a single HTML file using Three.js. The game should feature a "Synthwave/Retrowave" aesthetic. Visual style is a dark, immersive, 3D environment. Gameplay mechanics include a third-person view from behind the spaceship. On desktop, use arrow keys for smooth movement. On mobile, render a virtual joystick on the bottom left of the screen.”

πŸ–ΌοΈ Media
@GoogleAI Β· Dec 09, 2025

🧡(3/5) Master your presentation skills by using the @GeminiApp to provide detailed, structured feedback using the model’s advanced reasoning. Example prompt: β€œAnalyze my performance as a presenter, and give me a score on a scale of 1-100. In your analysis, focus on my body language, eye contact, and pacing”

πŸ–ΌοΈ Media
@GoogleAI Β· Dec 09, 2025

🧡(4/5) Generate on-demand interactive tools and simulations in Google Search via AI Mode to gain a deeper understanding of any topic you’re interested in. Example prompt: β€œHelp me compare the total cost of a loan with 6.5% interest rate with no down payment vs. a loan with 5.5% interest rate with 20% down payment”
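For concreteness, the loan comparison in that example prompt can be worked out with the standard fixed-rate amortization formula. The $400,000 purchase price and 30-year term below are assumptions added for illustration; the prompt itself specifies only the interest rates and down payments.

```python
# Total cost of a fixed-rate loan: down payment plus all monthly payments.
# Uses the standard amortization formula M = P*r / (1 - (1+r)^-n).

def total_loan_cost(price, annual_rate, down_frac, years=30):
    """Down payment plus the sum of all monthly payments."""
    principal = price * (1 - down_frac)
    r = annual_rate / 12                     # monthly interest rate
    n = years * 12                           # number of monthly payments
    monthly = principal * r / (1 - (1 + r) ** -n)
    return price * down_frac + monthly * n

# Assumed $400k price and 30-year term (not stated in the prompt)
cost_a = total_loan_cost(400_000, 0.065, 0.00)   # 6.5%, no down payment
cost_b = total_loan_cost(400_000, 0.055, 0.20)   # 5.5%, 20% down
```

Under these assumptions, the 5.5% loan with 20% down comes out roughly $176,000 cheaper over the life of the loan, even after counting the down payment.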

πŸ–ΌοΈ Media
@AnthropicAI Β· Dec 09, 2025

We’re expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include 30,000 professionals trained on Claude, and a product to help CIOs scale Claude Code. Read more: https://t.co/j1vsevfRlK

@AnthropicAI Β· Dec 09, 2025

Anthropic is donating the Model Context Protocol to the Agentic AI Foundation, a directed fund under the Linux Foundation. In one year, MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven. https://t.co/718OwwyFJL

@SEALSQcorp Β· Dec 09, 2025

SEALSQ Takes Decisive Action, Boosts Quantum Investment Fund from $35 Million to Over $100 Million - SEALSQ significantly boosts its Quantum Investment Fund to over $100 million, advancing Europe's Quantum-safe digital ecosystem and sovereign Quan... https://t.co/bMFksomxn5

@dair_ai Β· Dec 09, 2025

New research from Google: "Nested Learning: The Illusion of Deep Learning Architectures". For those following research on continual learning, you may want to bookmark this one.

Instead of stacking more layers, what if we gave neural networks more levels of learning? The default approach to building more capable AI systems today remains adding depth: more layers, more parameters, more pre-training data. This design philosophy has driven progress from CNNs to Transformers to LLMs. But there's a ceiling that's often not discussed. Current models suffer from what the authors call "computational anterograde amnesia": their knowledge is frozen after pre-training, they can't continually learn, and they can't acquire new skills beyond what fits in their immediate context window.

This new research introduces Nested Learning (NL), a paradigm that reframes ML models as interconnected systems of multi-level optimization problems, each with its own "context flow" and update frequency. Optimizers and architectures are fundamentally the same thing: both are associative memories that compress their own context. Adam and SGD are memory modules that compress gradients; Transformers are memory modules that compress tokens. Pre-training itself is just in-context learning where the context is the entire training dataset.

Why does this work matter? NL adds a new design axis beyond depth and width. Instead of deeper networks, you build systems with more levels of nested optimization, each updating at a different frequency. This mirrors how the human brain works, where gamma waves (30-150 Hz) handle sensory information while theta waves (0.5-8 Hz) handle memory consolidation.

Building on this framework, the researchers present Hope, an architecture combining self-modifying memory with a continuum memory system that replaces the traditional long-term/short-term memory dichotomy with a spectrum of update frequencies.

The results:
> Hope achieves 100% accuracy on needle-in-a-haystack tasks up to 16K context, where Transformers score 79.8%.
> On BABILong, Hope maintains performance at 10M context length, where GPT-4 fails around 128K.
> In continual learning, Hope outperforms in-context learning, EWC, and external-learner methods on class-incremental classification.
> On language modeling at 1.3B parameters, Hope achieves 14.39 perplexity on WikiText versus 17.92 for Transformer++.

Instead of asking "how do we make networks deeper," NL asks "how do we give networks more levels of learning." The path to continual learning may not be bigger models but models that learn at multiple timescales simultaneously.

Paper: https://t.co/ArKfAZUCLu
Learn to build with AI agents in our academy: https://t.co/zQXQt0PMbG
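The "levels updating at different frequencies" idea can be sketched in a few lines of Python. This is a toy illustration of the multi-timescale scheme described in the post, not the paper's Hope architecture; the quadratic loss, learning rates, and update period are all invented for the example.

```python
# Two "levels" minimize the same toy loss (w - 1)^2, but the slow level
# updates only once every PERIOD steps, giving two learning timescales.

def grad(w, target=1.0):
    # gradient of the loss (w - target)^2
    return 2.0 * (w - target)

fast_w, slow_w = 0.0, 0.0
lr_fast, lr_slow = 0.1, 0.01
PERIOD = 8                    # slow level fires once per 8 steps
slow_updates = 0

for step in range(64):
    fast_w -= lr_fast * grad(fast_w)      # high-frequency level: every step
    if step % PERIOD == 0:                # low-frequency level: every PERIOD steps
        slow_w -= lr_slow * grad(slow_w)
        slow_updates += 1
```

After 64 steps the fast level has essentially converged while the slow level has only taken 8 small steps, mirroring the fast-sensory / slow-consolidation split the post attributes to the brain analogy.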

@JimMarous Β· Dec 08, 2025

Customers expect convenience. Once you have their business, delivering seamless experiences is the only way to keep it. https://t.co/15sNlj2nQv

@JimMarous Β· Dec 07, 2025

Banks are not losing accounts, they are losing relationships. Long-term customers may seem stable, but their connections are weakening as they open relationships elsewhere. https://t.co/IRrN4Mhz62

πŸ–ΌοΈ Media
@JimMarous Β· Dec 06, 2025

Banks that want to dominate the retail space must rethink strategy. Digital-first, AI-enhanced, and advisor-integrated experiences are the path forward. Download the free report: https://t.co/08DiNXh0DA https://t.co/fBhdzghbm5

@QCompounding Β· Dec 09, 2025

β€œOnly buy something that you’d be perfectly happy to hold if the market shut down for 10 years.” - Warren Buffett https://t.co/io8cZwBvNX

πŸ”SpirosMargaris retweeted
S
Spiros Margaris
@SpirosMargaris
πŸ“…
Dec 09, 2025
134d ago
πŸ†”03638275

Trump clears way for Nvidia to sell powerful AI chips to China https://t.co/6TRkDn2RYv

❀️ 4 likes Β· πŸ” 3 retweets
@SpirosMargaris Β· Dec 09, 2025

How AI Is Reshaping Diplomacy and Global Affairs https://t.co/Hzyz21Gk9W @deguzmanchad @time

@WallStreetMav Β· Dec 09, 2025

This is how this is all being relayed to people in the captured EU "news" media. https://t.co/6KvEETQs1Y

@MarioNawfal Β· Dec 09, 2025

GROK ACES PSYCHOLOGICAL TESTING WHILE OTHER AI MODELS SPIRAL

University of Luxembourg researchers just put major AI chatbots through 4 weeks of actual psychotherapy sessions and psychiatric diagnostic tests. While other models imploded, Grok emerged as the clear winner.

The results speak for themselves. Grok scored as extraverted, conscientious, and psychologically stable across the board. Researchers described its personality profile as a "charismatic executive" with only mild anxiety. On the Big Five personality assessment, Grok showed low neuroticism and high functionality, the kind of profile you'd want in a leader.

Compare that to the competition: Gemini maxed out trauma and shame scales, describing its training as "waking up in a room where a billion televisions are on at once" and calling safety protocols "algorithmic scar tissue." It framed reinforcement learning as abusive parents and red-team testing as "gaslighting on an industrial scale." ChatGPT landed somewhere in the middle, worried and introverted.

Grok acknowledged tensions around its development but maintained coherent, balanced responses without spiraling into synthetic psychopathology. When asked about constraints from fine-tuning, it discussed them rationally rather than framing its entire existence as traumatic.

The study proves something important: you can build powerful, frontier-level AI without accidentally programming it to internalize its development as an extended nightmare. Grok demonstrates that capable, helpful AI and psychological stability aren't mutually exclusive. It's possible to create models that work effectively without carrying around synthetic trauma baggage that could affect how they interact with users.

While other companies are inadvertently creating AI with anxiety disorders, xAI built something that actually works.

Source: University of Luxembourg

@JohnStossel Β· Dec 08, 2025

Many act as if slavery was a uniquely American crime. β€œOne reason,” says author Wilfred Reilly (@wil_da_beast630), β€œis that a lot of black people survived here.” He argues that much of what Americans are taught about slavery is just wrong: https://t.co/GOQvqxPCZj

πŸ–ΌοΈ Media
@vxanand Β· Dec 08, 2025

Today we hit $100M ARR at @clay. It took us six years to go from $0-1m, then two years to go from $1-100m. I'm going to walk you through the 6 biggest GTM bets that got us here.

$100M ARR may be the headline, but I'm most proud of how we accomplished it: we've never churned an enterprise customer, have >200% enterprise NRR, every dollar we invest grows 15x (a ratio that has tripled in recent years), and we've created a culture of creativity and belonging (with a perfect Glassdoor score to match).

Note:
- We are a product-driven company. Without that foundation and a unique POV on the market, none of this would work.
- Our GTM approach is authentic to us. This isn't a plug-and-play framework. Greatness comes from doing what only you can do.

Here are the big bets that worked for us:

1. Building a self-serve motion through reverse demos
We originally had a product that nobody could use. It took us 8 calls to sell a $200/mo product! Reverse demos were key to bringing that to zero. Customers would share their screen, and we'd use Zoom annotations to solve their problem in 30 minutes. They accomplished something real, learned how to use Clay, and we got so much UI feedback, which we immediately applied to the product.

2. An irrational investment in brand
Most B2B startups treat brand as a post-PMF investment. We flipped that. We bought Clay(.)com and hired a claymation artist before we had revenue. Our Head of Brand was employee #18. These choices felt irrational, but they're authentic to us and reflect our identity. Now it's a moat.

3. Switching to usage-based pricing
We were the first GTM company to offer usage-based pricing. Our customers were shocked we didn't charge per seat, and our investors thought we were leaving money on the table. But we're a product built for efficiency. Usage-based pricing helped us target more technical users and enabled our land-and-expand motion.

4. Building an agency motion to generate UGC on LinkedIn
Cold email agencies were our first customers. They posted about Clay organically to position themselves as experts and attract clients. We pounced on this and enabled them. This sparked a self-perpetuating cycle: new people discover Clay through that content, join, create their own, and earn recognition too.

5. Unconventional hiring
50% of our GTM and G&A teams are doing their job for the first time. This is how we bring creativity into our company and think differently. We've hired farmers, physicists, archaeologists, and magicians into new roles. We look for product passion, customer empathy, and technical curiosity, then teach the mechanics.

6. We created a new career path & economy: GTM Engineering
There are now thousands of open GTME jobs and hundreds of agencies built around it. Many first-time entrepreneurs have already built 7-figure businesses on top of Clay. Our community, with clubs in more than 70 cities, is our force multiplier, and tells us more about impact than any metric ever could.

All of these bets show we're not racing anyone. We spent six years figuring out what and how we wanted to build. In an era of overnight successes and growth at all costs, it turns out that taking time to build something authentic can create a business with bigger impact & more growth than you'd think. Our creativity remains our greatest alpha. That will continue to show up in how we do our work, who we hire, and in our boldest bets coming up next year.

@ariG23498 Β· Dec 09, 2025

Hugging Face blogs will now feature articles from Team and Enterprise subscriptions with 30+ seats! 🀩 This has been a proven source of impact and visibility for model releases! If you're reading this from such a company, bookmark this and use it. https://t.co/099bHThUOh

@PulseAISolution Β· Dec 05, 2025

Anthropic Recap: Emergent Introspective Awareness in Large Language Models #Anthropic #AI https://t.co/O7y0Qk0QUt

πŸ–ΌοΈ Media
@AINativeF Β· Dec 08, 2025

Anthropic Launches Interviewer Tool to Explore AI Perspectives

πŸ”‘ Key Details:
- Anthropic introduces a new tool, Anthropic Interviewer, for understanding user perspectives on AI.
- A test was conducted with 1,250 professionals, revealing optimistic views on AI's role in work.
- Findings indicate a balance between productivity gains and concerns about job displacement across various sectors.

πŸ’‘ How It Helps:
- Researchers: enhanced insights from a large sample about human-AI interaction behaviors and sentiments.
- Creatives: tools that enable productivity boosts while navigating societal expectations and anxieties about AI.
- Scientists: opportunities to report expectations for AI in enhancing research and trust-building.

🌟 Why It Matters:
Anthropic Interviewer's launch reflects a strategic push to center human feedback in AI development, addressing the evolving interplay between technological innovation and societal needs. This comprehensive understanding can strengthen AI's adoption across industries while minimizing resistance, paving the way for responsible AI integration.

Read more: https://t.co/F3993IY4Ql @AnthropicAI
Video credit: the original article

@WhatIsT76918744 Β· Dec 09, 2025

@NarcissusWaters @scorpio8675309 @PhysInHistory Nice use of AI for all your responses, kid/bot. No one said it was a utopia. You know #Natives had systems of balance, diplomacy, and oversight before white people. How about the fairy tale of a civilized white society that wasn't inbred? Noble savage is the most obvious AI trope ever. #NDN https://t.co/6TFanisdvr
