Your curated collection of saved posts and media

Showing 24 posts · last 7 days · quality filtered
PyTorch (@PyTorch) · Feb 25, 2026

Matt White will speak at Dubai AI Festival, joining a panel discussion on who powers the global AI infrastructure. The session brings together leaders shaping the systems, platforms, and governance models that underpin AI at scale. As General Manager of AI at the Linux Foundation, Matt will share perspectives from the open ecosystem driving AI development and deployment worldwide.
📍 Dubai World Trade Centre
📅 April 7–8, 2026
🔗 Learn more: https://t.co/GzABsuPw0b
#PyTorch #AIInfrastructure #OpenSourceAI

PyTorch (@PyTorch) · Feb 26, 2026

🚨 Prices for #PyTorchCon Europe go up tomorrow, 27 Feb at 23:59 CET! Register for just €449 and join us from 7-8 April in Paris. Two days of #GenAI, large-scale training, performance engineering, and real-world #PyTorch deployments. Secure your pass now! https://t.co/mE4FXCU7aD

PyTorch (@PyTorch) · Feb 27, 2026

⏰ Final hours. Early bird rates for #PyTorchCon Europe end TODAY at 23:59 CET. Join us 7-8 April in Paris. Technical sessions, poster presentations, community expo, and the Flare Party. 🔥
Schedule: https://t.co/HBq4XhQLd2
Register: https://t.co/uzweFa52qd
https://t.co/NWKdVCKdQO

PyTorch (@PyTorch) · Feb 27, 2026

Our February PyTorch Foundation newsletter is live. Inside: leadership and membership announcements, the live schedule for PyTorch Conference Europe 2026 in Paris, a recap of PyTorch Day India 2026, ambassador highlights, and recent technical blogs from across the ecosystem.
📨 Subscribe for updates delivered directly to your inbox: https://t.co/PdpCPZTflP
👉 Read: https://t.co/RFWKZrc94Q
#PyTorch #PyTorchCon #AIInfrastructure #OpenSourceAI

PyTorch (@PyTorch) · Feb 27, 2026

Are you tired of hand-tuning each model you develop? What if you could describe the architecture once and let a system apply graph transformations and optimized kernels? NVIDIA TensorRT LLM AutoDeploy marks a shift toward treating inference optimization as a compiler and runtime responsibility rather than a burden on the model author. This approach enables faster experimentation, broader model coverage, and a cleaner separation between model design and deployment. Learn more from our documentation, example scripts, and the blog post 'Automating Inference Optimizations with NVIDIA TensorRT LLM AutoDeploy'. Read the full post: https://t.co/qlIO7q35oY
#PyTorch #OpenSourceAI #AI #Inference #Innovation

PyTorch (@PyTorch) · Feb 27, 2026

New to the PyTorch Ecosystem Landscape: Kubetorch. Kubetorch enables ML research and development on Kubernetes across training, inference, RL, evals, data processing, and more, in a simple and unopinionated package. Learn more: https://t.co/YadOKc3sQo #PyTorch #Kubernetes #MLOps #AIInfrastructure

hardmaru (@hardmaru) · Feb 28, 2026

Doc-to-LoRA: Learning to Instantly Internalize Contexts https://t.co/bDqLdqhmB9 https://t.co/UOHnPZ8sfO

ch402 (@ch402) · Feb 24, 2026

@togelius I wasn't familiar with that one, but this idea has definitely been in the literature for a while. We actually actively disclaim credit for the general idea. https://t.co/wV8l0WOsjM

πŸ”ch402 retweeted
Anthropic (@AnthropicAI) · Feb 26, 2026

A statement from Anthropic CEO Dario Amodei on our discussions with the Department of War. https://t.co/rM77LJejuk

❤️ 55,570 likes · 🔁 9,439 retweets
thebasepoint (@thebasepoint) · Feb 28, 2026

For those wondering how mass domestic surveillance could be consistent with "all lawful use" of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available information (CAI): "...to identify every person who attended a protest" https://t.co/GsfWtmSKvd

WilliamBarrHeld (@WilliamBarrHeld) · Jan 19, 2026

2026 aesthetic: stable scaling runs https://t.co/HnalPzYKJk

wen_kaiyue (@wen_kaiyue) · Jan 21, 2026

(1/n) Introducing Hyperball — an optimizer wrapper that keeps weight & update norm constant and lets you control the effective (angular) step size directly. Result: sustained speedups across scales + strong hyperparameter transfer. https://t.co/1vRMHgZgoX

moo_jin_kim (@moo_jin_kim) · Jan 24, 2026

We release Cosmos Policy 💫: a state-of-the-art robot policy built on a video diffusion model backbone.
- policy + world model + value function — in 1 model
- no architectural changes to the base video model
- SOTA in LIBERO (98.5%), RoboCasa (67.1%), & ALOHA tasks (93.6%)
🧵👇 https://t.co/cz9L3ziJ6x

πŸ–ΌοΈ Media
percyliang (@percyliang) · Feb 12, 2026

If you're excited about creating simulation technology to transform decision-making, come join us: https://t.co/6DpuZuip9r

togethercompute (@togethercompute) · Feb 25, 2026

We're open-sourcing CoderForge-Preview — 258K test-verified coding-agent trajectories (155K pass | 103K fail). Fine-tuning Qwen3-32B on the passing subset boosts SWE-bench Verified from 23.0% → 59.4% pass@1, and it ranks #1 among open-data models ≤32B parameters. Thread on the data generation pipeline 🧵

kenziyuliu (@kenziyuliu) · Feb 26, 2026

Can we build a blind, *unlinkable inference* layer where ChatGPT/Claude/Gemini can't tell which call came from which user, like a "VPN for AI inference"? Yes! Blog post below + we built it into an open source infra/chat app and have served >15k prompts at Stanford so far. How it helps with AI user privacy:

# The AI user privacy problem

If you ask an AI to analyze your ChatGPT history today, it's surprisingly easy to infer your demographics, health, immigration status, and political beliefs. Every prompt we send accumulates into an (identity-linked) profile that the AI lab controls completely and indefinitely. At a minimum this is a goldmine for ads (as we know now). A bigger issue is the concentration of power: AI labs can easily become (or be asked to become) a Cambridge Analytica, whistleblow your immigration status, or work with health insurers to adjust your premium if they so choose. This is a uniquely worse problem than search engines because your average query is now more revealing (not just keywords), interactive, and intelligence is now cheap. Despite this, most of us still want these remote models; they're just too good and convenient! (This is aka the "privacy paradox".)

# Unlinkable inference as a user privacy architecture

The idea of unlinkable inference is to add privacy while preserving access to the remote models controlled by someone else. A "privacy wrapper" or "VPN for AI inference", so to speak. Concretely, it's a blind inference middle layer that:
(1) consists of decentralized proxies that anyone can operate;
(2) blindly authenticates requests (via blind signatures / RFC 9474, 9578) so requests are provably sandboxed from each other and from user identity;
(3) relays prompts over randomly chosen proxies that don't see or log traffic (via client-side ephemeral keys or hosting in TEEs); and
(4) the provider simply sees a mixed pool of anonymous prompts from the proxies. No state, pseudonyms, or linkable metadata.

If you squint, an unlinkable inference layer is essentially a vendor for per-request, anonymous, ephemeral AI access credentials (for users and agents alike). It partitions your context so that user tracking is drastically harder. Obviously, unlinkability isn't a silver bullet: the prompt itself still goes to the remote model and can leak privacy (so don't use our chat app for a therapy session!). It aims to combat *longitudinal tracking* as a major threat to user privacy, and its statistical power increases quickly by mixing more users and requests. Unlinkability can be applied at any granularity. For an AI chat app, you can unlinkably request a fresh ephemeral key for every session so tracking is virtually impossible.

# The Open Anonymity Project

We started this project with the belief that intelligence should be a truly public utility. Like water and electricity, providers should be compensated by usage, not by who you are or what you do with it. We think unlinkable inference is a first step toward this "intelligence neutrality".

# Try it out! It's quite practical

- Chat app "oa-chat": https://t.co/ELf8LvxFzX (<20 seconds to get going)
- Blog post that should be a fun read: https://t.co/OwFmyFlZH5
- Project page: https://t.co/Swerz1xDE2
- GitHub: https://t.co/38CeKajCy2

quitchatgpt (@quitchatgpt) · Feb 28, 2026

‼️ BREAKING: OpenAI just agreed to the #Pentagon's CORRUPT deal to use AI for killer robots and mass surveillance. #QuitGPT and SWITCH TO #CLAUDE NOW to support Anthropic's red lines and PUNISH #OpenAI for caving to the Department of War. JOIN THE BOYCOTT https://t.co/4GeAAjTRS4

GaryMarcus (@GaryMarcus) · Feb 28, 2026

https://t.co/Mdp16zqWel – check it out!

GaryMarcus (@GaryMarcus) · Feb 28, 2026

@tooquickto During the election, Trump claimed he would end the war in 24 hours; getting the US out was (I thought) implicit. Nice review of his actual quotes here: https://t.co/gppBDlK78i I am not myself at all advocating for a diminishment of support for 🇺🇦

GaryMarcus (@GaryMarcus) · Feb 28, 2026

check out https://t.co/GKmFW3IzwT

krassenstein (@krassenstein) · Feb 28, 2026

BREAKING: Axios reports that the Pentagon has agreed to OpenAI's rules for deploying its technology, and that the conditions are similar to those put forth by Anthropic, which Trump rejected. Hmm, I wonder why… —> https://t.co/Bgp2OdWUKf

GaryMarcus (@GaryMarcus) · Feb 28, 2026

@cihatkaya making stuff up always feels good in the moment. do some research next time, please. eg https://t.co/8xYG8M61c5

klara_sjo (@klara_sjo) · Feb 28, 2026

This is the AI that will be taking our jobs https://t.co/nycRqJimm6

πŸ–ΌοΈ Media
BlackHC (@BlackHC) · Feb 28, 2026

I'm speechless at OpenAI releasing that contract excerpt and acting as if there aren't gaping holes that could be exploited far beyond their stated "red lines." I'm not a lawyer, but this is pretty obvious and common sense. (And to be clear: if Google had signed the same deal, I'd be saying the same thing internally. The issues here are bigger than friendly competition between companies.)

OpenAI's "red lines" are: no mass domestic surveillance, no directing autonomous weapons, and no high-stakes automated decisions. They argue their cloud-only deployment + safety stack + cleared OpenAI personnel "in the loop" make violations impossible. They also claim the contract references the relevant laws/policies "as they exist today" so future changes won't weaken the standards. But the actual language they published is still full of obvious escape hatches.

This is why Anthropic refusing to sign makes sense. Reporting on the Anthropic–"DoW"/Pentagon standoff described them saying the proposed contract language was framed as compromise but paired with "legalese that would allow safeguards to be disregarded at will." You don't need to agree with Anthropic on everything to see what they're reacting to: language that sounds like ethics but cashes out as essentially "subject to whatever the government decides later."

## Autonomous weapons

The problem is that the restriction is conditional: it depends on what law/regulation/policy "requires human control" for. If policy definitions are weak (or later revised), the contract language itself doesn't read like a durable "no autonomous weapons" ban. It reads like "we'll follow whatever the current regime says requires human control." OpenAI says elsewhere that the agreement "locks in" today's standards even if laws/policies change. If that "freeze" clause is real and enforceable, sure. But it's not visible in the excerpt itself, so the excerpt alone doesn't justify the level of confidence they're projecting.

## "High-stakes decisions"

Same loophole. This forbids only decisions that already require human approval under whatever authorities apply. If a decision doesn't formally require approval (or can be reclassified/reshaped), the clause doesn't obviously prohibit automation of the step that matters.

## Surveillance

"Directives," "purpose," and "unconstrained" are squishy on purpose: "DoD directives" aren't laws; they're internal policy. That matters because we have real precedents for administrations leaning on aggressive internal legal/policy interpretations as a shield until courts/politics catch up. If you think "secret memos" is alarmist, look at the pattern:

1. Reporting in early 2026 described a previously hidden DHS/ICE legal memo position asserting warrantless/forced home entry under certain circumstances, which is the kind of internal-lawyer move that tends to get written, circulated, and only later litigated and retracted.
2. Historically, the Bush-era OLC torture memos are the canonical example of "legalistic compromise" that later turned out to be a moral and legal disaster. (You don't have to litigate the details to make the point: internal legalese can be used to launder outcomes.)

"Unconstrained" is not a real safeguard. Surveillance can be huge while still "constrained" by selectors, categories, time windows, or a stated "foreign intelligence purpose." And it only covers private information, so not the massive world of public data that can still be used for profiling, targeting, and "pattern-of-life" analysis at scale.

## Domestic law enforcement

> shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

This is not a hard prohibition. "Except as permitted" is not a ban. It's a permission for exceptions, and "other applicable law" is an open-ended bucket by design.

If you want a concrete, recent example: the Associated Press reported that formal orders extended the Washington, D.C. National Guard deployment through Feb. 28, 2026, to protect federal property/functions and to support federal and D.C. law enforcement. That's exactly the sort of "domestic deployment supporting law enforcement" scenario where this clause stops sounding like a "red line" and starts sounding like legal throat-clearing.

## "Cloud-only / no edge deployment prevents autonomous weapons" rings false

OpenAI's own argument is: cloud-only (no edge devices) means you can't power autonomous weapons. But that's not convincing. You don't need GPT-5.2 running on the missile. You can use a cloud model for high-level decision-making (tasking, prioritization, target recommendation, mission planning) over a satellite link (Starlink or otherwise), while a separate local system handles actual guidance and execution. High latency is totally compatible with "strategic / operational" autonomy while still enabling lethal outcomes.

Once the pattern exists, "additional safety layers" are a policy choice: implementations change, exceptions get made, and today's contract language tends to get "grandfathered" into tomorrow's contract template. So layered safeguards can reduce risk today, but the contract language itself is exactly the kind of "looks strict, bends easily" compromise that becomes precedent. And creating precedent is the real problem here.
