
Key Highlights:
- AI Can Imitate Leadership Communication: Artificial Intelligence can now write emails and messages that closely mimic a CEO’s personal style, tone, and language.
- The “Wade Test”: A modern twist on the Turing Test, showing that many employees can’t tell the difference between messages written by a real CEO and messages written by an AI trained to sound like them.
- Employees Doubt AI Messages: Even when AI writes convincingly, workers often find the responses less helpful if they believe the message came from a machine.
- Algorithm Aversion Exists: People tend to trust human-written messages more than AI-generated ones—even when the content is the same.
- Efficiency vs. Trust: While AI can save time for busy leaders, building employee trust in AI communication remains a major challenge.
- Future of Work: Personal AI chatbots for every employee may be coming, but acceptance depends on solving credibility issues.
Artificial Intelligence (AI) can help leaders write emails and messages that sound like they were written by a real person. But just because a message sounds human doesn’t mean employees will trust it or act on it.
A recent study led by Prithwiraj Choudhury of Harvard Business School looked at whether a chatbot could stand in for a company’s CEO when answering employee questions. The results showed that while the AI could write convincingly, employees didn’t always trust an answer they believed came from a machine, even when it had actually been written by their boss.
A New Kind of Test
This experiment is a modern version of the “Turing Test,” the famous thought experiment proposed by British mathematician Alan Turing in 1950 to check whether a machine can trick people into thinking it’s human. In this new version, called the “Wade Test” after the CEO in the study, researchers wanted to see whether an AI could copy the way a real CEO writes.
They trained an AI on all of the CEO’s past emails and messages, so it learned his style: his word choice, punctuation, grammar, and even his characteristic small mistakes. Then they asked both the real CEO and the AI to answer the same questions.
Who Can Tell the Difference?
Of the company’s 800 employees, 105 took part in the experiment; most had been with the company for at least three years. Each was shown 10 answers and asked to guess which came from the real CEO and which came from the AI bot.
The result? Employees guessed correctly only about 59% of the time, barely better than the 50% they would have scored by guessing at random. Many simply couldn’t tell the real CEO and the AI apart.
Trust Is Still a Problem
Even though the AI wrote convincing answers, employees rated the responses as less helpful when they thought those responses came from a machine. This reflects a phenomenon called “algorithm aversion”: people tend to distrust advice from machines more than advice from humans, even when the machine does a good job.
In a second test, researchers showed people answers to business questions. Some answers were labeled as coming from an AI and others as coming from real CEOs, regardless of who had actually written them. When people believed an answer came from an AI, they rated it as less helpful, even when it had actually been written by a real CEO.
What Does This Mean for the Future?
Choudhury believes that one day, everyone might have their own personal AI assistant for writing messages, just as everyone uses email today. But before that happens, companies need to solve the credibility problem: getting people to trust AI-written communication.
AI can save time for busy bosses by handling routine messages. But unless people believe what the AI says, its usefulness in the workplace may be limited.