@ESYudkowsky
The point of this proposal is not that an AGI will never think of stuff unless you tell it. It's to probe whether *current* AIs are smart enough yet to think of instrumentally convergent strategies if we don't tell them. Science, not safety.