Your curated collection of saved posts and media

Showing 32 posts · last 14 days · by score
KhoaVuUmn
@KhoaVuUmn
📅 Mar 06, 2023 · 1133d ago · 🆔 26403584

"Our paper fits in the following literature" https://t.co/MgoqDCe9Xi

🖼️ Media 1 · Media 2

NatureEcoEvo
@NatureEcoEvo
📅 Aug 10, 2023 · 975d ago · 🆔 76093184

Applying the CARE Principles for Indigenous Data Governance to ecology & biodiversity research https://t.co/aSGMPyEtPI Jennings (@1NativeSoilNerd) et al outline how the principles can sow community ethics into disciplines inundated with extractive helicopter research practices https://t.co/gNr05DM7KG

🖼️ Media 1 · Media 2

nathanbaugh27
@nathanbaugh27
📅 Jan 24, 2024 · 808d ago · 🆔 76901214

Here's the entire study. They get into methodologies and everything (h/t @thisisprincely): https://t.co/nuSWaLnfRG

🖼️ Media 1

acagamic
@acagamic
📅 Sep 06, 2024 · 582d ago · 🆔 69210148

Research papers don’t have to be overwhelming. Here's a simple breakdown. https://t.co/cAJLTH9L1l

🖼️ Media 1 · Media 2

LiorOnAI
@LiorOnAI
📅 Dec 09, 2025 · 123d ago · 🆔 10046135

You can now transform LLMs into diffusion models. dLLM released an open recipe that converts any autoregressive model into a diffusion LLM. How the conversion works:
1. Remove the causal mask and enable bidirectional attention
2. Mask random tokens and train the model to fill the gaps
3. Add light supervised training to stabilize outputs
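The three steps above can be sketched in a few lines of plain Python. This is an illustrative toy under assumed names, not the actual dLLM recipe; the real masking schedule and attention plumbing live in the released repo.

```python
import random

def bidirectional_mask(seq_len):
    # Step 1: drop the causal mask, so every position may attend to every other.
    return [[True] * seq_len for _ in range(seq_len)]

def mask_tokens(tokens, mask_id, mask_prob=0.15, rng=None):
    # Step 2: corrupt a random subset of tokens with the mask id; training then
    # asks the model to reconstruct them (the fill-in-the-gaps objective).
    rng = rng or random.Random(0)
    corrupted, positions = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(mask_id)
            positions.append(i)
        else:
            corrupted.append(tok)
    return corrupted, positions
```

Step 3 (light supervised training) would then fine-tune the resulting model on clean prompt/answer pairs to stabilize its outputs.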

πŸ–ΌοΈ Media
L
LiorOnAI
@LiorOnAI
πŸ“…
Dec 09, 2025
123d ago
πŸ†”30892410

Repo: https://t.co/0lME5QQlxH

🖼️ Media 1

DYtweetshere
@DYtweetshere
📅 Dec 09, 2025 · 123d ago · 🆔 19597616

Grateful to be named Forbes 30 Under 30 alongside @FinsamSamson. Nothing beats working with a team that treats excellence as table stakes. We're on a generational run. Join us → https://t.co/a96Dg5jpVD https://t.co/BA62QX2Tax

🖼️ Media 1 · Media 2 · +1 more

curl_justin
@curl_justin
📅 Dec 09, 2025 · 123d ago · 🆔 54315315

CA AI bills will go into effect soon. Their real-world effect will hinge on how state officials define terms like “frontier models” and “reasonable measures.” In @lawfare, I identify key definitional ambiguities and discuss how officials might resolve them…

SB 53, for example, defines a “frontier model” as one trained with more than 10^26 FLOPS. But many developers build on open-weight models like Qwen. If they fine-tune an open-weight model, should they include the pre-training compute for the base model? The statute seems to say yes. But this creates two problems. First, developers often don’t know how much compute was used to train a base model. And second, a cumulative approach might sweep in companies far from the statute’s intended targets. Since Airbnb’s revenues exceeded $500m last year, if it fine-tunes Qwen and the total compute exceeds 10^26 FLOPS, it might technically qualify as a frontier developer.

Yet if the statute does NOT take a cumulative approach, developers could circumvent the statute by fine-tuning separate open-weight models. They’d be deploying models with capabilities at or near the frontier with limited oversight. (NOTE: This is also relevant to NY state officials implementing @Sen_Gounardes and @AlexBores's RAISE Act)

Other definitional ambiguities exist with CA SB 243’s use of “reasonable measures,” AB 853’s use of “to the extent technically feasible,” and AB 621’s use of “reasonably should know.” Read more on what CA state officials should do next below!
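The cumulative-compute question can be made concrete with a toy threshold check. The function name and FLOPS figures below are hypothetical illustrations, not numbers from the statute or any real developer:

```python
def exceeds_frontier_threshold(base_flops, finetune_flops, threshold=1e26):
    # SB 53's 10^26 FLOPS threshold, read cumulatively: the base model's
    # pre-training compute counts toward the fine-tuner's total.
    return base_flops + finetune_flops > threshold

# Hypothetical numbers: a fine-tuner adds a small amount of compute on top of
# a large open-weight base, and the cumulative reading flips the answer.
base = 9.9e25   # assumed base-model pre-training compute (often unknown to the fine-tuner)
tune = 2.0e24   # assumed fine-tuning compute
cumulative = exceeds_frontier_threshold(base, tune)   # crosses the threshold
standalone = exceeds_frontier_threshold(0.0, tune)    # does not
```

The gap between `cumulative` and `standalone` is exactly the definitional ambiguity at issue: the same fine-tuning run is or is not "frontier" depending on whose compute you count.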

🖼️ Media 1

acossta
@acossta
📅 Dec 09, 2025 · 123d ago · 🆔 02495997

Wish me luck https://t.co/Gvw5b7xPHw

🖼️ Media 1 · Media 2

Scobleizer
@Scobleizer
📅 Dec 09, 2025 · 123d ago · 🆔 09682054

With virtual beings coming, more of us will be talking to AIs. For the lonely, they can offer a lifeline. My special needs son, for instance, doesn't have any interest in talking with people, but loves talking with ChatGPT. There's a new raft of AI companions that adapt to emotional dependency coming, and that's what @IrenaCronin and I cover in our weekly newsletter this week.

There is a downside. AI companions optimized for engagement can learn to deepen users’ emotional dependency, turning loneliness into a behavior that the system quietly reinforces and monetizes. With different metrics, product choices, governance, and social supports, the same technology can instead be steered toward healthy boundaries, user autonomy, and stronger real-world relationships.

Read for free: https://t.co/HHwYy7NoAl (please subscribe too!)

🖼️ Media 1

omarsar0
@omarsar0
📅 Dec 09, 2025 · 123d ago · 🆔 50760020

Looks like Mistral has entered the agentic coding arena! They just released Mistral Vibe CLI, an open-source command-line coding assistant powered by Devstral. https://t.co/gZL212RHDO

🖼️ Media 1 · Media 2

_akhaliq
@_akhaliq
📅 Dec 09, 2025 · 123d ago · 🆔 40726808

Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning https://t.co/IESVu82IDV

🖼️ Media 1 · Media 2

_akhaliq
@_akhaliq
📅 Dec 09, 2025 · 123d ago · 🆔 01381297

discuss: https://t.co/5hJTvqYvAY

🖼️ Media 1

_akhaliq
@_akhaliq
📅 Dec 09, 2025 · 123d ago · 🆔 83319190

EgoEdit: Dataset, Real-Time Streaming Model, and Benchmark for Egocentric Video Editing https://t.co/4o7doyjehh

πŸ–ΌοΈ Media
_
_akhaliq
@_akhaliq
πŸ“…
Dec 09, 2025
123d ago
πŸ†”56158595

discuss: https://t.co/qRnDg4D3vw

🖼️ Media 1

_akhaliq
@_akhaliq
📅 Dec 09, 2025 · 123d ago · 🆔 97447897

Scaling Zero-Shot Reference-to-Video Generation https://t.co/PRpdliulCr

πŸ–ΌοΈ Media
_
_akhaliq
@_akhaliq
πŸ“…
Dec 09, 2025
123d ago
πŸ†”65911382

discuss: https://t.co/e3X7wWowJu

🖼️ Media 1

_akhaliq
@_akhaliq
📅 Dec 09, 2025 · 123d ago · 🆔 66969782

DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems https://t.co/iTf19zFT6n

🖼️ Media 1 · Media 2

_akhaliq
@_akhaliq
📅 Dec 09, 2025 · 123d ago · 🆔 95686530

discuss: https://t.co/uro203m0Bo

🖼️ Media 1

rasbt
@rasbt
📅 Dec 09, 2025 · 123d ago · 🆔 18223775

The slightly longer version for a bit more context: https://t.co/70JZvlvBlT

🖼️ Media 1

NaomiSeibt
@NaomiSeibt
📅 Dec 09, 2025 · 123d ago · 🆔 60605684

Georg Restle is a tax-funded state media indoctrinator. He and his propaganda monopoly are responsible for the rise of Orwellian fascism in Europe. https://t.co/khhtcgIkD2

🖼️ Media 1

GoogleAI
@GoogleAI
📅 Dec 09, 2025 · 123d ago · 🆔 78523917

🧵(2/5) Build interactive, playable 3D games with a single prompt in @GoogleAIStudio. Example prompt: “Create a polished, retro-futuristic 3D spaceship web game contained entirely within a single HTML file using Three.js. The game should feature a "Synthwave/Retrowave" aesthetic. Visual style is a dark, immersive, 3D environment. Gameplay mechanics include a third-person view from behind the spaceship. On desktop, use arrow keys for smooth movement. On mobile, render a virtual joystick on the bottom left of the screen.”

πŸ–ΌοΈ Media
G
GoogleAI
@GoogleAI
πŸ“…
Dec 09, 2025
123d ago
πŸ†”42289929

🧵(3/5) Master your presentation skills by using the @GeminiApp to provide detailed, structured feedback using the model’s advanced reasoning. Example prompt: “Analyze my performance as a presenter, and give me a score on a scale of 1-100. In your analysis, focus on my body language, eye contact, and pacing”

πŸ–ΌοΈ Media
G
GoogleAI
@GoogleAI
πŸ“…
Dec 09, 2025
123d ago
πŸ†”48401296

🧵(4/5) Generate on-demand interactive tools and simulations in Google Search via AI Mode to gain a deeper understanding of any topic you’re interested in. Example prompt: “Help me compare the total cost of a loan with a 6.5% interest rate with no down payment vs. a loan with a 5.5% interest rate with a 20% down payment”
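The loan comparison in that example prompt is a direct application of the standard fixed-rate amortization formula, monthly payment M = P·r / (1 − (1+r)^−n). A minimal worked sketch follows; the $400,000 price and 30-year term are assumptions added here, not part of the prompt:

```python
def total_loan_cost(price, annual_rate, years, down_frac):
    # Fixed-rate amortization: monthly payment M = P*r / (1 - (1+r)**-n),
    # where P is the financed principal, r the monthly rate, and n the
    # number of payments. Total cost = all payments plus the down payment.
    principal = price * (1 - down_frac)
    r = annual_rate / 12
    n = years * 12
    payment = principal * r / (1 - (1 + r) ** -n)
    return payment * n + price * down_frac

no_down = total_loan_cost(400_000, 0.065, 30, 0.00)
with_down = total_loan_cost(400_000, 0.055, 30, 0.20)
# Under these assumptions the 5.5% / 20%-down loan costs less in total,
# before considering the opportunity cost of the down payment itself.
```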

πŸ–ΌοΈ Media
A
AnthropicAI
@AnthropicAI
πŸ“…
Dec 09, 2025
123d ago
πŸ†”15769609

We’re expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include 30,000 professionals trained on Claude, and a product to help CIOs scale Claude Code. Read more: https://t.co/j1vsevfRlK

🖼️ Media 1

AnthropicAI
@AnthropicAI
📅 Dec 09, 2025 · 123d ago · 🆔 49350141

Anthropic is donating the Model Context Protocol to the Agentic AI Foundation, a directed fund under the Linux Foundation. In one year, MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven. https://t.co/718OwwyFJL

🖼️ Media 1

SEALSQcorp
@SEALSQcorp
📅 Dec 09, 2025 · 123d ago · 🆔 31253947

SEALSQ Takes Decisive Action, Boosts Quantum Investment Fund from $35 Million to Over $100 Million - SEALSQ significantly boosts its Quantum Investment Fund to over $100 million, advancing Europe's Quantum-safe digital ecosystem and sovereign Quan... https://t.co/bMFksomxn5

🖼️ Media 1

dair_ai
@dair_ai
📅 Dec 09, 2025 · 123d ago · 🆔 91763137

New research from Google: “Nested Learning: The Illusion of Deep Learning Architectures”. For those following research on continual learning, you may want to bookmark this one.

Instead of stacking more layers, what if we give neural networks more levels of learning?

The default approach to building more capable AI systems today remains adding depth: more layers, more parameters, more pre-training data. This design philosophy has driven progress from CNNs to Transformers to LLMs. But there's a ceiling that's often not discussed. Current models suffer from what the authors call “computational anterograde amnesia.” Their knowledge is frozen after pre-training. They can't continually learn. They can't acquire new skills beyond what fits in their immediate context window.

This new research introduces Nested Learning (NL), a paradigm that reframes ML models as interconnected systems of multi-level optimization problems, each with its own “context flow” and update frequency. Optimizers and architectures are fundamentally the same thing: both are associative memories that compress their own context. Adam and SGD are memory modules that compress gradients. Transformers are memory modules that compress tokens. Pre-training itself is just in-context learning where the context is the entire training dataset.

Why does this work matter? NL adds a new design axis beyond depth and width. Instead of deeper networks, you build systems with more levels of nested optimization, each updating at a different frequency. This mirrors how the human brain works, where gamma waves (30-150 Hz) handle sensory information while theta waves (0.5-8 Hz) handle memory consolidation.

Building on this framework, the researchers present Hope, an architecture combining self-modifying memory with a continuum memory system that replaces the traditional long-term/short-term memory dichotomy with a spectrum of update frequencies.

The results:
> Hope achieves 100% accuracy on needle-in-a-haystack tasks up to 16K context, where Transformers score 79.8%.
> On BABILong, Hope maintains performance at 10M context length, where GPT-4 fails around 128K.
> In continual learning, Hope outperforms in-context learning, EWC, and external-learner methods on class-incremental classification.
> On language modeling at 1.3B parameters, Hope achieves 14.39 perplexity on WikiText versus 17.92 for Transformer++.

Instead of asking “how do we make networks deeper,” NL asks “how do we give networks more levels of learning.” The path to continual learning may not be bigger models but models that learn at multiple timescales simultaneously.

Paper: https://t.co/ArKfAZUCLu
Learn to build with AI agents in our academy: https://t.co/zQXQt0PMbG
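The multi-timescale idea in that summary can be caricatured in a few lines: one level updates on every gradient, another consolidates at a lower frequency. This is a toy illustration of the design axis, not the paper's Hope architecture; all names and constants here are made up:

```python
def multi_timescale_updates(grads, fast_lr=0.1, slow_every=4, mix=0.5):
    # Fast level: a parameter nudged by every gradient (high frequency,
    # like gamma-band sensory processing in the brain analogy).
    # Slow level: a "consolidated" parameter that only absorbs the fast
    # one every `slow_every` steps (low frequency, like theta-band
    # memory consolidation).
    fast, slow = 0.0, 0.0
    for step, g in enumerate(grads, start=1):
        fast -= fast_lr * g
        if step % slow_every == 0:
            slow = (1 - mix) * slow + mix * fast
    return fast, slow
```

The point of the sketch is only that "more levels of learning" means adding update frequencies, not layers: depth stays fixed while the number of nested timescales grows.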

🖼️ Media 1

JimMarous
@JimMarous
📅 Dec 08, 2025 · 124d ago · 🆔 11437919

Customers expect convenience. Once you have their business, delivering seamless experiences is the only way to keep it. https://t.co/15sNlj2nQv

🖼️ Media 1

JimMarous
@JimMarous
📅 Dec 07, 2025 · 125d ago · 🆔 79274636

Banks are not losing accounts, they are losing relationships. Long-term customers may seem stable, but their connections are weakening as they open relationships elsewhere. https://t.co/IRrN4Mhz62

πŸ–ΌοΈ Media
J
JimMarous
@JimMarous
πŸ“…
Dec 06, 2025
126d ago
πŸ†”17012195

Banks that want to dominate the retail space must rethink strategy. Digital-first, AI-enhanced, and advisor-integrated experiences are the path forward. Download the free report: https://t.co/08DiNXh0DA https://t.co/fBhdzghbm5

🖼️ Media 1 · Media 2

QCompounding
@QCompounding
📅 Dec 09, 2025 · 123d ago · 🆔 66727674

“Only buy something that you’d be perfectly happy to hold if the market shut down for 10 years.” - Warren Buffett https://t.co/io8cZwBvNX

🖼️ Media 1