Organizations using AI to monitor employees' behavior and productivity can expect them to complain more, be less productive and want to quit more, unless the technology can be framed as supporting their development, Cornell University research finds.
Surveillance tools cause people to feel a greater loss of autonomy than oversight by humans, according to the research. Businesses and other organizations using the fast-changing technologies to evaluate employee behaviors should consider their unintended consequences, which may prompt resistance and hurt performance, the researchers say. They also suggest an opportunity to win buy-in, if the subjects of surveillance feel the tools are there to assist rather than judge their performance: assessments they fear will lack context and accuracy.
"When artificial intelligence and other advanced technologies are implemented for developmental purposes, people like that they can learn from it and improve their performance," said Emily Zitek, associate professor of organizational behavior. "The problem occurs when they feel like an evaluation is happening automatically, straight from the data, and they're not able to contextualize it in any way."

Zitek is the co-author of "Algorithmic Versus Human Surveillance Leads to Lower Perceptions of Autonomy and Increased Resistance." Rachel Schlund is first author.
The researchers conducted four experiments involving nearly 1,200 total participants. In the first study, when asked to recall and write about times when they were monitored and evaluated by either type of surveillance, participants reported feeling less autonomy under AI and were more likely to engage in "resistance behaviors."
A pair of studies asked participants to work as a group to brainstorm ideas for a theme park, then to individually generate ideas about one segment of the park. They were told their work would be monitored by a research assistant or AI, the latter represented in Zoom videoconferences as an "AI Technology Feed." After several minutes, either the human assistant or "AI" relayed messages that the participants weren't coming up with enough ideas and should try harder. In surveys following one study, more than 30% of participants criticized the AI surveillance, compared to about 7% who were critical of the human monitoring.
Beyond complaints and criticism, the researchers found that those who thought they were being monitored by AI generated fewer ideas, indicating worse performance.
"Even though the participants got the same message in both cases that they needed to generate more ideas, they perceived it differently when it came from AI rather than the research assistant," Zitek said. "The AI surveillance caused them to perform worse in multiple studies."
In a fourth study, participants who imagined they worked in a call center were told that humans or AI would analyze a sample of their calls. For some, the analysis would be used to evaluate their performance; for others, to provide developmental feedback. In the developmental scenario, participants no longer perceived algorithmic surveillance as infringing more on their autonomy and did not report a greater intention to quit.
"Organizations trying to implement this kind of surveillance need to recognize the pros and cons," Zitek said. "They should do what they can to make it either more developmental or ensure that people can add contextualization. If people feel like they don't have autonomy, they're not going to be happy."
IMAGE CREDIT: Andrea Piacquadio