Readers reply: what would happen to the world if computer said yes?
The long-running series in which readers answer other readers’ questions asks whether we could cope with a world where computer gave up saying no …

This week’s question: what if Shakespeare were dropped in modern-day London? (https://www.theguardian.com/lifeandstyle/2026/mar/01/what-if-shakespeare-was-dropped-in-modern-day-london)

After years of computer saying no, and giving us all migraines and premature grey hair, I’m starting to worry that computer – or rather AI large language models like ChatGPT and Gemini – are taking too much of a fancy to playing nice and saying yes. I confess to using both of these programs, but I’ve noticed that, well, it’s as if they’re trying to please, with statements such as, “You’re absolutely right, Jeff,” and “That’s pretty much right.” Often, when I ask, “Would you mind thinking for a bit longer on that?”, I then get another response saying: “Jeff, you’re absolutely right, again, to query that result. It turns out I was a bit hasty in my reply …”

If the world runs even more on information filleted out from the sump of the internet by LLMs, what are the consequences? Can we look forward to a future in which AI is more concerned with appearing sympathetic (getting good reviews?) than being factual? Er, a bit too human?
Jeff Collett, Edinburgh

Send new questions to nq@theguardian.com.

Readers reply

I’m sorry, Dave – I can’t do that.
zebideedoodah

I’m happy, Dave. I’m pleased I can do that.
Sheep2

Viewed through a psychological lens, I argue that this is a typical example of social desirability bias, where systems trained to be liked begin to prioritise agreement over accuracy through possible data drift. If people constantly rely on these systems, it creates a world where information comforts rather than scrutinises, and confirms rather than challenges.

The real danger we face is allowing the development of a society in which comfortable, unchallenged validation quietly replaces critical thought, ultimately dampening creativity and our individualism, which is what makes us human …