โ€œFeralโ€ gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.

Chatbots like ChatGPT, Claude, and Gemini don’t just make things up; they generate and spread gossip, complete with negative evaluations and juicy rumours that can cause real-world harm, according to a new analysis by philosophers Joel Krueger and Lucy Osler from the University of Exeter.

The harm caused by AI gossip isn’t a hypothetical threat. Real-world cases already exist. After publishing an article about how emotionally manipulative chatbots can be, New York Times reporter Kevin Roose found that chatbots were describing his writing as sensational and accusing him of poor journalistic ethics and unscrupulous behaviour. Other AI bots have falsely detailed people’s involvement in bribery, embezzlement, and sexual harassment. These gossipy AI-generated outputs cause real-world harms: reputational damage, shame, and social unrest.


The study outlines how chatbots gossip, both to human users and to other chatbots, but in ways that differ from human gossip. This can lead to harm that is potentially wider in scope than the fake information chatbots are already known to spread.

Bot-to-bot gossip is particularly dangerous because it operates unconstrained by the social norms that moderate human gossip. Free of those checks, it embellishes and exaggerates as it spreads quickly in the background, passing from one bot to the next and inflicting significant harm.

Dr Osler said: “Chatbots often say unexpected things, and when chatting with them it can feel like there’s a person on the other side of the exchange. This feeling will likely be more common as they become even more sophisticated.

“Chatbot ‘bullshit’ can be deceptive, and seductive. Because chatbots sound authoritative when we interact with them (their dataset exceeds what any single person can know, and false information is often presented alongside information we know is true), it’s easy to take their outputs at face value.

“This trust can be dangerous. Unsuspecting users might develop false beliefs that lead to harmful behaviour or biases based upon discriminatory information propagated by these chatbots.”

The study shows how the drive to increasingly personalise chatbots may be motivated by the hope that we’ll become more dependent on these systems and give them greater access to our lives. It’s also done to intensify our feeling of trust and to push us to develop increasingly rich social relationships with them.

Dr Krueger said: “Designing AI to engage in gossip is yet another way of securing increasingly robust emotional bonds between users and their bots.

“Of course, bots have no interest in promoting a sense of emotional connection with other bots, since they don’t get the same ‘kick’ out of spreading gossip the way humans do. But certain aspects of the way they disseminate gossip mirror the connection-promoting qualities of human gossip while simultaneously making bot-to-bot gossip potentially even more pernicious than gossip involving humans.”

The researchers predict that user-to-bot gossip may become more common. In these cases, users might seed bots with different nuggets of gossip, knowing the bots will, in turn, rapidly disseminate them in their characteristically feral way. Bots might therefore act as intermediaries, responding to user-seeded gossip and rapidly spreading it to others.

