Why is Alexa written to sound human?

Many virtual assistants and chatbots are written to seem human. George investigates why, whether it’s necessary, and where writers come in.

“Umm, I’m sorry, I didn’t understand the question.”

When uttered by a perplexed interviewee or bemused passer-by, this phrase wouldn’t be out of place. When it’s spoken to me by a piece of software trapped in a box, it comes across as an oddity.

But it’s exactly what Alexa, Amazon’s virtual assistant, says to me when I ask the voice-activated Echo Dot yet another obscure question.

You and I know Alexa isn’t really a person. She’s a bot, one with a series of lines written for her by a team of savvy writers at Amazon – a team that deliberately chose to add this human inflection. But what place does humanity have in AI, bots and virtual assistants? And what skills do writers need to make sure this humanity comes across as genuine?

Writing the ‘virtual’ out of virtual assistants

Despite Alexa’s very-much virtual existence inside a box, she has been written to dot her lines with very human-sounding ‘umms’. Long, drawn-out sounds like “umm” are usually stallers: subconsciously used by you and me to buy more time to come up with the answer to a question, or the next essential beat in a conversation.

Alexa does not need this. I know, because I’ve seen the Echo Dot handle answers in a different way: it simply takes longer to respond, with the ring of lights pulsating to show that Alexa is diligently racking her digital brain for an answer. So why does Alexa try to replicate human speech, placating us with a natural-sounding “umm” that only delays her response further?

It’s not just Alexa who’s out there ‘faking’ humanity. Apple’s Siri similarly says it “didn’t quite catch that” when the mic doesn’t pick up your query. But that’s the thing: it’s your phone’s hardware and software not quite working in tandem that results in your misheard request, not some magical human living inside your iPhone who’s slightly hard of hearing.

Putting the ‘A’ in AI

On the other end of the spectrum, some of the virtual assistants out there are being written in a way that embraces their virtual-ness.

Google Now and the Google Assistant are prime examples, happily dumping a web search or info card onscreen for any query they find too complex to answer immediately – shattering the fiction and reminding you that you’re talking to a phone, not a person.

Meanwhile, Cortana happily embraces her place as an AI, represented by abstract icons and animations that make her appear virtual. She’ll even tell you she’s a robot if you ask her nicely enough. And considering her name comes from the AI companion in the Halo game series, it’s clear that Microsoft is happy for Cortana to be seen as a very non-human helper.

Writing humanity in

So why are some writers passing their virtual assistants off as human beings when we know they absolutely aren’t?

A big part of it is to do with usage patterns. Computerworld spoke with the writers behind Cortana and found that “part of the craft of virtual assistant character development is to create a trusting, respectful relationship between human and assistant… If you don’t respect it, you won’t like it. And if you don’t like it, you won’t use it.”

But exactly how much does that respectful relationship hinge on the fiction that your AI helper is human-like?

The same Computerworld article goes on to examine humanity in AI with Intel’s Director of Intelligent Digital Assistance and Voice, Pilar Manchon.

Manchon tells us: “when we interact with a virtual agent, we’re compelled to behave in a specifically social way because we’re social animals. It’s just how we’re wired. In order for users to feel comfortable with a virtual assistant, the assistant must exhibit… social intelligence, emotional intelligence and more. Not doing so would make the agent unlikeable in the same way and for the same reason that a real person without these traits is unlikable.”

This doesn’t necessarily mean the assistant must be human, but this is how the writers of Alexa have chosen to respond to the need for a respectful, social relationship between person and virtual assistant.

It’s all about character

It all comes down to virtual assistants being loved, and used more often, by people. Robyn Ewing, a TV and film writer turned AI wordsmith, summed this up when she told the Financial Review that for most users it’s often easier and quicker to get the info you need online without the help of a virtual assistant, “so if the character doesn’t delight you, then what is the point?”

With this in mind, it seems less about humanity, and more about a specific, authentic and relatable character. In fact, Cathy Pearl, director of user experience at Sense.ly, argues that people are more forgiving of mistakes made by an AI that presents itself as non-human – provided it has a sense of humour about any blunders.

If you’re currently building a chatbot, remember that it isn’t enough to simply inject some humanity into the dialogue. You have to give it some authentic, consistent character as well. So, if you haven’t hired any professional dialogue writers, it could be time to start putting up some job ads.
