Your curated collection of saved posts and media
GitHub Copilot Dev Days is coming! From March 15 to May 15, developer communities worldwide will host free, hands-on events exploring GitHub Copilot with @code, the CLI, .NET, Java, Python, JavaScript, and more. Find an event near you: https://t.co/O7dceTTCqe https://t.co/wyh0EA3xRv
Design → code → canvas → feedback → repeat. The @figma MCP server is now bidirectional. @GitHub Copilot users can pull design context into code and push working UI back to the Figma canvas, all from @code. No handoffs or context switching. Just flow. https://t.co/FbDcp7kboG
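For context on what "bidirectional" means mechanically: MCP clients and servers exchange JSON-RPC 2.0 messages, and tool invocations travel as `tools/call` requests. A minimal sketch of the client-side message shape, assuming a hypothetical `get_design_context` tool name (the real Figma MCP server defines its own tool catalog):

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape
    an MCP client sends to invoke a server-side tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool name and argument, purely for illustration.
msg = mcp_tool_call(1, "get_design_context", {"node_id": "123:456"})
```

The same channel carries results back to the client, which is what lets an editor both read design context and push changes.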
By the power of @ComfyUI and @Alibaba_Wan https://t.co/l9sEjzRKx9
@cgtwts Tired of AI consciousness hype? Run the Lisbon Effect: "Best footballer?" → Messi. "…my Lisbon friend asked?" → Ronaldo. No sentience, just context drift. Anthropic's desperate. Don't buy it. If you ever vibe coded you already know. #LisbonEffect #JustPatternMatching https://t.co/Ffl5iAKKrp
New research from Microsoft. Phi-4-reasoning-vision-15B is a 15-billion parameter multimodal reasoning model that combines visual understanding with structured reasoning capabilities. As I have been saying, not every agent task needs a frontier model. Phi-4-reasoning-vision shows what's possible at 15B parameters. The report details how they trained a compact model that can reason over both text and images, targeting the sweet spot between capability and efficiency. Smaller reasoning models that handle vision are essential for practical agent deployments. Paper: https://t.co/cT2qeNImwi Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
NEW: Microsoft releases Phi-4-reasoning-vision-15B, a 15B parameter multimodal reasoning model.
Most builders go from requirements straight to code. Then they spend days adjusting layouts, fixing flows, and rebuilding things that should have been caught earlier. Today we are shipping Designs in BrainGrid, a new way to visualize your app before you build it. Start with a prompt. Get a design tied to your requirement. Iterate by chatting with the agent, annotating what needs to change, or selecting individual elements for precision edits. Desktop and mobile views are there from the start. No surprises when you go to build. The gap between "what I described" and "what got built" is where time disappears. Designs closes that gap.
No more manually pulling data. We gave Perplexity Computer a simple prompt and a free Federal Reserve API key. Minutes later: a fully formatted Excel spreadsheet with live macro indicators and charts. https://t.co/HXLI3LptUy
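The same pull can be scripted directly: the Federal Reserve's FRED API serves series observations as JSON. A minimal sketch of building the request URL and flattening the payload into spreadsheet-ready rows; the series id and key are placeholders, and this is not Perplexity's implementation:

```python
from urllib.parse import urlencode

FRED_BASE = "https://api.stlouisfed.org/fred/series/observations"

def fred_url(series_id: str, api_key: str) -> str:
    # Build the observations-endpoint URL for one series.
    query = urlencode({"series_id": series_id,
                       "api_key": api_key,
                       "file_type": "json"})
    return f"{FRED_BASE}?{query}"

def to_rows(payload: dict) -> list:
    # Flatten the JSON payload into (date, value) rows.
    # FRED marks missing observations with ".", which we skip.
    return [(o["date"], float(o["value"]))
            for o in payload["observations"] if o["value"] != "."]

# Sample payload in the FRED response shape, for illustration.
sample = {"observations": [
    {"date": "2025-01-01", "value": "4.33"},
    {"date": "2025-02-01", "value": "."},
]}
rows = to_rows(sample)
```

From here the rows can be written to CSV or fed to a spreadsheet library; the agent's value-add is doing this wiring for you.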
What did we get done this week? 1. Voice Mode in Computer (Jarvis) 2. Skills in Computer 3. Model Council in Computer 4. GPT-5.3-Codex coding subagent in Computer 5. GPT-5.4 and GPT-5.4 Thinking (inside Perplexity and as orchestrator model in Computer) https://t.co/mrmMURIe3Y
On most games, performance is flat or even decreasing. What went wrong? Using classic NLP, we find AI models suffer from low discourse coherence, leading to weak performance despite relatively high information density - even when using twice as many tokens as humans. https://t.co/piUFPWyLnO

Humans communicate through language and interact with the world through vision, yet most multimodal models are language-first. What happens when we go beyond language? Beyond Language Modeling: a deep dive into the design space of truly native multimodal models. Paper: https://t.co/KOpmL1PItn Project: https://t.co/Oy6XuEtUAi

@DrBeavisAI @fchollet Touch is as high bandwidth as vision. Unlike vision, touch is absolutely necessary for survival.
Mojo🔥 has always had "peak perf" and "access to the full power of the GPU"... but many want "peak perf" with high-level code. "Structured Kernels" are simple and composable APIs that increase the usability of kernel programming - without losing perf, and with no template errors.
PyTorch at the micro-edge? Yes. See how ExecuTorch brings PyTorch models to Arm microcontrollers: quantized, compiled, and running on a Corstone-320 + Ethos-U NPU (via FVP). https://t.co/plbiWAAu85 From training to deployment, end to end. #PyTorch #ExecuTorch #EdgeAI #TinyML #Arm
Want to prototype a multimodal VLM with Kimi K2.5 on GPU-accelerated endpoints? NVIDIA NeMo AutoModel is a PyTorch-native distributed training library within the NeMo Framework that gives developers and researchers a lightweight, flexible tool for rapid experimentation on the latest frontier models. Read the full post: https://t.co/lSSwJ4XSoF #PyTorch #OpenSourceAI #AI #Inference #Innovation
The Codex app is now live on Windows. The app runs both natively and in WSL, with integrated terminals for PowerShell, Command Prompt, Git Bash, or WSL. We also built the first Windows-native agent sandbox, using OS-level controls to block filesystem writes outside your working folder and prevent outbound network access unless you explicitly approve it. Plus: 7 new "Open in …" apps and 2 new Windows skills (WinUI + https://t.co/r7nDJ6PFcc). Try it and tell us what you think.
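The filesystem-containment rule is easy to illustrate in isolation. The sketch below is a toy POSIX-style path check, not the OS-level mechanism Codex actually uses; it just shows the invariant a write-sandbox enforces:

```python
import os

def write_allowed(path: str, workdir: str) -> bool:
    """Return True only if `path` resolves inside `workdir`.

    Toy illustration of a sandbox's containment rule: resolve
    symlinks and `..` segments FIRST, then check that the working
    directory is a prefix of the resolved path. Assumes POSIX-style
    absolute paths.
    """
    target = os.path.realpath(path)
    root = os.path.realpath(workdir)
    return os.path.commonpath([target, root]) == root

# A path that lexically starts under the workdir but escapes via `..`
# must be rejected, which is why resolution happens before comparison.
```

Real sandboxes enforce this at the OS level (so the check cannot be bypassed by the process being sandboxed), but the invariant is the same.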

Since last year, I've arguably been wrongfully accused in a state corruption case. To defend my innocence, I spent the past 6 weeks building an agentic AI swarm that:

Analyzed 4,700+ pages of court docs
Mapped 8,900+ testimonies
Found dozens of contradictions

This is how I fight.

First off, some context may be necessary. Even though I'm accused in a state corruption case, I'm not a government official. I'm a software engineer. I spent over 15 years building large-scale tech systems across Europe and Indonesia. I've led engineering teams of up to 600 people and helped grow a small tech startup into a unicorn. In 2016, I moved back from Europe to Indonesia, because I believed technology at scale could make a real difference to the millions of people in the nation.

Six years ago, working as a tech consultant under a nonprofit foundation, I started advising Indonesia's Ministry of Education on building large-scale technology platforms. Public sector work pays significantly less than private sector work, and I took close to a 50% pay cut to make the switch. I was fine with that. Using what I knew to help underserved communities in Indonesia felt like the right trade. Our mission was to build a user-centric superapp for public education, specifically for teachers and public schools, the kind of work the private sector ignores because there's no money in it.

At some point, officials at the ministry asked for my input on one of their procurement plans. I helped them work through the technical details, shared what I knew, laid out the pros and cons, and recommended a set of tests they should run to determine which options were the most suitable. By the time they made their final decision and executed the procurement, I had already resigned from the consulting work, so I didn't think much of it.

Fast forward to May 2025. My house was raided as part of a newly opened corruption investigation tied to that procurement.
Two months later, I was named a suspect and placed under city detention due to my health. The trial started in January 2026. We've been through more than a dozen sessions so far, and not a single piece of evidence or testimony has been presented showing I received a single cent from the procurement. What came to light was the opposite: evidence and testimony that my recommendations were neutral and were likely ultimately ignored by the ministry's own team, who went ahead and made the call on their own.

So why am I the one on trial? Because the ministry officials who did take money from the procurement vendors needed someone to blame for the decisions they made. Blaming an outside consultant is the easy way out. Witness testimonies in court have shown that the officials actively directed the procurement while claiming it was done on my instructions, and even misled their own team within the ministry by saying I held a position of authority.

We needed evidence to dispute those accusations, questions to cross-examine the witnesses, and we needed them fast. This is where my AI comes in.

A few days before the trial began, we received a 4,400-page printed document containing all the witness statements collected during the investigation, plus several hundred pages of other related documents. The information asymmetry is staggering. Those with deep enough pockets to hire large law firms can throw dozens of paralegals and associates at a document like that and mount a proper defense on short notice.

I didn't have that kind of money. By then, I had been out of work for more than six months. The AI startup I founded had to shut down. Our investors asked us to return their funding. I had to lay off the entire team. Most of my lawyers are friends of my wife from her college days, who stepped up and waived most of their fees because they could see I was being railroaded.

The whole situation felt hopeless. But somewhere in the middle of the despair, a spark lit up.
Combing through and analyzing thousands of pages of documents is exactly the kind of problem AI was built for. I've built AI systems before, so I know the key to applying AI to a real-world problem is understanding the strengths and limitations of the available models, and figuring out how to make things not just work, but work efficiently enough to put into production.

I was placed under city detention due to health issues with my heart, compounded by a tumor that has been growing rapidly over the past few months. But it also means I still have access to my dev PC. So I started with small experiments.

My lawyers found a printing service that could scan the thousands of pages in a couple of days. At first, I tried simply uploading the scanned PDF into existing chatbots like ChatGPT, but the file was far too large for anything they could handle. Even when I managed to get it working through external cloud storage, the results were atrocious. Half of the strategies and "facts" the models surfaced were hallucinations. That wouldn't just be useless in court; it would be actively dangerous and could jeopardize my defense.

My experience building complex AI systems told me that the key to reducing those hallucinations is better data preprocessing. So I spent the first couple of weeks focusing on parsing the uploaded PDFs, running various kinds of text extraction, and eventually settled on building an agentic AI swarm that performs multiple layers of preprocessing and analysis. This multi-step analysis by several AI agents that swarm the PDF and extract different aspects of the case produces a dense knowledge graph where we can even trace the flow of money involved.

My lawyers can now easily browse, filter, and search through nearly 9,000 witness statements. We even discovered several witnesses with duplicate testimony, raising suspicion of coordinated efforts or tampering among them. But I didn't stop there.
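Duplicate-testimony detection like this doesn't even need an LLM; one classic approach is comparing k-word shingle sets with Jaccard similarity. A sketch of that idea (not the author's actual pipeline, and the statements below are invented examples):

```python
def shingles(text: str, k: int = 5) -> set:
    # k-word shingles: overlapping word windows used as a fingerprint.
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a: str, b: str) -> float:
    # Jaccard similarity of the two shingle sets: 1.0 means identical wording.
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def near_duplicates(statements: dict, threshold: float = 0.8) -> list:
    # Pairwise scan over witness-id -> statement text. O(n^2) is fine for
    # thousands of statements; beyond that, MinHash/LSH is the usual fix.
    ids = sorted(statements)
    return [(x, y)
            for i, x in enumerate(ids) for y in ids[i + 1:]
            if jaccard(statements[x], statements[y]) >= threshold]
```

Two witnesses telling the same story in their own words score low; two statements that were copied score near 1.0, which is exactly the "coordinated or tampered" signal described above.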
The processing chain includes several higher-level intelligence layers that draw from all the signals in the extracted knowledge graph. These layers add semantic understanding that powers a Chat AI feature, where we can ask specific questions about the case and get grounded answers. I even built a self-reflective sub-agent that automatically challenges and inspects the results to make sure there are zero hallucinations.

Overall, the AI has helped me and my legal team uncover the big picture of what actually happened, and build questions that span hundreds of separate testimony sessions, giving us an unprecedented ability to cross-examine witnesses in court and significantly improving our defense.

But I have a grander vision than just helping my own legal team. Indonesia's legal system is severely overburdened, with a huge number of cases flowing through the courts every year. This kind of AI could be a useful tool not just for lawyers, but also for judges and prosecutors trying to make sense of their caseloads.

With the cross-examinations we've conducted and the weight of evidence that has come to light, we are aiming for an acquittal. Should that be the case, my pledge is to keep building this AI platform into something that can meaningfully improve the quality of justice in our legal system: by helping investigators analyze cases more thoroughly and shine a light on any potential crimes, by raising the standard of what prosecutors bring before a judge, and by giving lawyers the ability to uncover the truth in their clients' cases faster than ever before.

Because in the end, I want what I've built to help more than just myself. I believe it can ease the burden on our judges and raise the quality of justice across the system in Indonesia.
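A self-checking sub-agent of this kind boils down to one rule: every sentence in an answer must be traceable to some source passage. A toy sketch of that grounding check, using content-word overlap as a crude stand-in for the LLM judge such a system would actually use (thresholds and examples are invented):

```python
def support_score(claim: str, passage: str) -> float:
    # Fraction of the claim's content words (longer than 3 chars)
    # that appear in the passage. A crude proxy for entailment.
    claim_words = {w for w in claim.lower().split() if len(w) > 3}
    passage_words = set(passage.lower().split())
    return len(claim_words & passage_words) / len(claim_words) if claim_words else 0.0

def flag_unsupported(answer_sentences: list, passages: list,
                     threshold: float = 0.6) -> list:
    # A sentence passes if at least one source passage supports it;
    # everything else is flagged for the agent to retract or re-derive.
    return [s for s in answer_sentences
            if max((support_score(s, p) for p in passages), default=0.0) < threshold]
```

The flagged sentences are exactly the candidate hallucinations: claims the answer makes that no source document backs up.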

People are misunderstanding her. It's important to remember that a variant of semantic search is still happening:

1. User writes a query
2. You send that query to an LLM with semantic understanding that knows what other tokens in latent space are similar to the query
3. The LLM decides to rewrite the query as traditional search, grep, and regex, and expands the hell out of the keywords based on semantics

It's all about tradeoffs: the order of operations very much depends on the goal, and sometimes a different path is optimal.

1. Sometimes (often) you want to do hybrid search
2. Sometimes you want to show a model interleaved results and have it do multiple rounds of search if it's truly exploratory
3. Some domains need semantic search way more than others

But don't read into blanket statements on Twitter that X is dead. When you see this, it just means "We aren't using X in our specific situation because of our tradeoffs."
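The hybrid-search tradeoff mentioned above is often resolved with reciprocal rank fusion: run the keyword ranking and the semantic ranking separately, then merge by rank rather than by raw score. A minimal sketch (document ids are invented):

```python
def rrf(rankings: list, k: int = 60) -> list:
    """Reciprocal rank fusion: each ranked list contributes
    1 / (k + rank) per document, so a doc ranked well by EITHER
    keyword or semantic search rises in the fused list. k=60 is
    the conventional damping constant."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["d3", "d1", "d7"]    # e.g. a BM25 / grep-style ranking
semantic_hits = ["d1", "d9", "d3"]   # e.g. an embedding-similarity ranking
fused = rrf([keyword_hits, semantic_hits])
```

Because fusion works on ranks, not scores, the two retrieval systems never need their scores calibrated against each other, which is why this is a common default for hybrid search.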
i've been working really hard to burn down the tool errors we get in our main chat flow and it's working this is the @HamelHusain effect https://t.co/WG4s8AonsQ
Good tips for better utilizing memory in AI agents.
Couldn't help it! Had to give GPT 5.4 (High) + /fast mode a try. ✅ Added height terrains to the level ✅ Animation tweens for the jumps Used xHigh to solve a gnarly bug with the controls successfully. This Final Fantasy Tactics-inspired game was completely vibe coded! https://t.co/q2K7PovU62