It’s all about their lack of “humanity”
The so-called conversational user interface (UI) lets people interact with digital tools through text or spoken dialogue, which is much closer to how we interact with other people.
The problem of “mistaking computers for people” is exacerbated when people interact with artificial agents through these conversational interfaces.
Leonardi cites a study in which AI assistants answered common business queries. One group of employees was told the assistant was an AI; the other group was not, and assumed it was a person. The employees who didn’t know about the AI regularly got mad at “him” when he didn’t provide useful answers.
They would say things like “He sucks!” or “I expected more from him” about the results the machine produced.
Most importantly, their strategies for improving relationships with the machine mirrored the strategies they would use with other people in the office.
They asked their questions more politely, rephrased them using different words, or tried to time their questions strategically for moments when they thought the agent would be, in the words of one person, “not so busy.” None of these strategies was particularly successful.
In contrast, employees in the other group reported much greater satisfaction with their experience. They typed their queries as if addressing a computer and spelled things out in detail to make sure the AI, which can’t “read between the lines” or pick up on nuance, would understand their requests.
Members of the second group regularly noted how pleasantly surprised they were when their requests returned useful or even unexpectedly good answers, and they attributed any problems to ordinary computer errors.
The takeaway is clear: treating technology as technology, no matter how human-like or intelligent it may seem, is the key to success when interacting with machines, says Paul Leonardi.
Let us know your thoughts in the comment section, or in our Telegram chat.