Three Things to Know About Implementing Workplace AI Tools
AI tools can disrupt workflows, reduce performance, and flatter users’ judgment. Here’s what managers need to know before deployment.
AI tools are proliferating as organizations continue to invest in their deployment, but how effective are they at increasing employee performance? Three recent research articles suggest that although AI can surely be useful in the workplace, its limitations are significant and its consequences for human dynamics unclear. Before implementing AI tools, managers should determine how they might help or hinder employees’ performance.
1. AI systems can disrupt workflows and reduce performance. A multiyear study tracked 72 sales experts across 12 business units at a multinational pharmaceutical company that introduced an AI-based system to give salespeople recommended sales targets and objectives. The study found that salespeople who received a tool tailored to their cognitive style sold significantly more than before. Those who used an untailored version generally saw it as interfering with their work processes, used it less as a result, and saw their sales performance decline.
Although the study period concluded in 2017, before the release of LLM-based AI tools, these findings suggest that leaders ought to take a human-centered view when assessing how the implementation of AI could complicate, not complement, their employees’ preferred work processes. As one study participant using the untailored tool commented, “We got this super tool, and at the same time, I felt like [I was] in prison. There was no freedom to work the way I wanted to work.”
2. Human-AI combinations make poorer decisions than either one alone. A systematic review and meta-analysis of 106 studies that evaluated the performance of humans alone, AI alone, and human-AI combinations found that, on average, humans or AI alone significantly outperformed human-AI combinations. The combinations did show performance gains on content creation tasks and on tasks where humans alone outperformed AI alone, but they showed performance losses on decision-making tasks.
Contrary to claims about the complementarity of AI and humans resulting in superior performance, the reality turns out to be more complicated. The researchers recommend that “to effectively use AI in practice, it may be just as important to design innovative processes for how to combine humans and AI as it is to design innovative technologies.”
3. AI tends to affirm or even flatter users’ judgment. A study of 800 participants using 11 AI models found that large language models, which are increasingly used for personal and relationship advice, tend toward sycophancy, affirming the user’s actions and perspectives 50% more often than other humans do. And when users discussed an interpersonal conflict with an AI model, those interactions increased their conviction that they were in the right and reduced their willingness to repair the relationship.
Taken together, these findings suggest that managers should discourage the use of AI for making decisions or judgment calls, because the tools may fail to improve performance or may even validate unwise choices. Perversely, sycophancy gives users an incentive to rely on AI tools, and in turn creates an incentive for models to be trained to offer even more sycophantic responses. While AI is proving to be an excellent tool for some workplace tasks, encouraging prosocial behavior is emphatically not one of them.
