@gerardsans
“Move fast and break things” worked for apps. It is deadly for AI. What Canada’s case shows, and why UK policy choices matter now.

1/ In Canada, internal systems flagged a person’s violent conversations with an AI months before a mass shooting. Police were not notified. Multiple people, including children, died.

2/ This reveals a gap in corporate responsibility when the law does not require companies to act on serious risk.

3/ In the US there is still no comprehensive federal AI safety law. Self‑regulation and speed often come first, even for generative systems.

4/ In the EU, the AI Act and the Digital Services Act create legal duties for “high‑risk” AI systems and platform safety, including incident reporting and transparency obligations backed by real penalties.

5/ Same technology, two very different legal realities.

6/ The UK is still deciding which path to take. It does not have to adopt Silicon Valley’s “disrupt first, sort out harm later” model, nor simply copy the lax US approach.

7/ Canada’s tragedy shows the cost of weak oversight. The EU’s regulatory frameworks show that strong rules designed to keep people safe already exist.

8/ UK policymakers @OfficeforAI @SciTechgovuk @Ofcom must act now, with safety and accountability baked in.

9/ US federal AI policy voices such as the @FTC, and the broader US AI debate, also shape global norms.

10/ EU enforcement and AI policy engagement by @EU_Commission @EUAI_Office show one workable alternative.

Conclusion: Don’t let the first mover set global norms for everyone else. UK policy choices can protect people, not just placate Big Tech. Act now on AI safety, reporting and accountability. https://t.co/xdbvvcjMWP

#AI #AIRegulation #SafetyNotSpeed #UKTech #AIAct #AIpolicy