In a new series of experiments, artificial intelligence (A.I.) algorithms were able to influence people’s preferences for fictitious political candidates or potential romantic partners, depending on whether recommendations were explicit or covert. Ujué Agudo and Helena Matute of Universidad de Deusto in Bilbao, Spain, present these findings in the open-access journal PLOS ONE on April 21, 2021.

From Facebook to Google search results, many people encounter A.I. algorithms every day. Private companies are conducting extensive research on the data of their users, generating insights into human behavior that are not publicly available. Academic social science research lags behind private research, and public knowledge on how A.I. algorithms might shape people’s decisions is lacking.

To shed new light, Agudo and Matute conducted a series of experiments that tested the influence of A.I. algorithms in different contexts. They recruited participants to interact with algorithms that presented photos of fictitious political candidates or online dating candidates, and asked the participants to indicate whom they would vote for or message. The algorithms promoted some candidates over others, either explicitly (e.g., “90% compatibility”) or covertly, such as by showing their photos more often than others’.


Overall, the experiments showed that the algorithms significantly influenced participants’ decisions about whom to vote for or message. For political decisions, explicit manipulation was effective while covert manipulation was not. For dating decisions, the pattern reversed: covert manipulation swayed choices, while explicit recommendations did not.

The researchers speculate that these results may reflect a preference for explicit, human-style advice in subjective matters such as dating, while people may be more receptive to algorithmic advice in ostensibly rational decisions such as voting.

In light of their findings, the authors express support for initiatives that seek to boost the trustworthiness of A.I., such as the European Commission’s Ethics Guidelines for Trustworthy AI and DARPA’s explainable AI (XAI) program. Still, they caution that more publicly available research is needed to understand human vulnerability to algorithms.

Meanwhile, the researchers call for efforts to educate the public on the risks of blind trust in recommendations from algorithms. They also highlight the need for discussions around ownership of the data that drives these algorithms.

The authors add: “If a fictitious and simplistic algorithm like ours can achieve such a level of persuasion without establishing actually customized profiles of the participants (and using the same photographs in all cases), a more sophisticated algorithm such as those with which people interact in their daily lives should certainly be able to exert a much stronger influence.”

