@random_walker
I find Anthropic's behavior perplexing. Anyone who does serious research with these models knows that they don't have stable desires or preferences: tweak the question slightly and you get a different answer. Note that this is a simple empirical observation about model behavior, completely separate from the question of whether models are moral agents with preferences worth respecting. Surely people at Anthropic know this. Why do they persist with this wacky stuff?