In other words, unlike other products, people judge AI the way they judge people: when someone takes longer to answer a question, we tend to assume the answer is more thoughtful. Put another way, the study participants believed something that wasn’t true.
There’s just one problem: armed with this data, the researchers advise AI developers to abandon a one-size-fits-all approach and implement “context-aware latency,” treating latency as a “tunable design variable.” Simple questions, they say, should get a quick answer, while more complex questions, including moral dilemmas, should “feature” slight delays to match the request’s gravity. They call it “positive friction.”
In effect, the researchers claim it is good practice to trick users into believing an AI chatbot is considering their question more deeply than it really is, because users will be happier in their delusion that AI is like a person, who needs more time to mull over serious questions.

