@rohanpaul_ai
Futurism article: many people now let AI overrule their own judgment, even when the AI is wrong. The problem is not just that LLMs make errors; it is that users treat fluent answers as proof, turning guesses into borrowed certainty. The paper calls this cognitive surrender: people stop weighing evidence themselves and accept the model’s answer as the answer. In one experiment, participants followed correct AI advice 92.7% of the time, but still followed wrong advice 79.8% of the time, showing that confidence in the system can survive even after accuracy breaks down. Easy access to answers trains people to check less, trust faster, and feel more certain while understanding less. --- futurism.com/artificial-intelligence/study-do-what-chatgpt-tells-us