@jayedelson
This week, Sam Altman asked the world for sympathy over threats to his home. At the same moment, his lawyers were in court arguing that OpenAI had no obligation to stop a dangerous stalker from terrorizing our client: a man previously arrested for assault with a deadly weapon and for making a bomb threat, found mentally incompetent by a court, and released last week on a technicality.

Even though OpenAI's own systems had flagged his conversations for "mass casualty" activity, the company argued that it would not shut down his accounts while authorities searched for him. It also argued that his chat logs, which could identify who else is in danger and how he may be planning to act, should not be turned over. OpenAI made these arguments in the wake of Tumbler Ridge, FSU, and Soelberg, three tragedies now linked to ChatGPT-assisted murder.

Today, a court disagreed. The chat logs will be turned over, and he will be kept off the platform. We are thankful for the court's ruling and remain stunned by OpenAI's lack of human decency. No one should have to go to court to get a company to take "mass casualty" seriously. https://t.co/fmh92Ip9Ej