Five top tips for building accessible conversational interfaces

The Amazon Echo and Google Home are top of many Christmas lists this year. Both of these amazing devices can follow our spoken instructions to play music, turn our lights on and off, exchange the basics of a short conversation with us and, of course, tell us the weather. Amongst the highlights of our TechShare Pro conference in November was a talk by Léonie Watson of the W3C, who offered her five top tips on creating accessible conversations with our machines.

Legends of talking machines

“There are reports that as far back as 1,000 years ago people were thinking about the concept of artificial speech,” says Watson, Director of Developer Communications at The Paciello Group, a member of the W3C (World Wide Web Consortium) Advisory Board, and co-chair of the W3C Web Platform Working Group.

Legends of talking “machines” go back 1,000 years to Pope Sylvester II (c. 950–1003 AD), the first French Pope, who supposedly created a very basic dialogue system including components of speech recognition, understanding, generation, and synthesis, according to the essay “Gerbert, the Teacher” by Oscar G. Darlington in The American Historical Review.

Steve Jobs introduces the first talking Apple Mac

Fast forward to the 1980s, when DECtalk, Digital Equipment Corporation’s text-to-speech synthesiser, was introduced. 1984 also saw Steve Jobs demonstrate the Apple Mac’s talking capabilities for the first time with Apple’s MacInTalk. See the video below from 3 minutes 20 seconds.


"There’s been some good marketing around such technology", said Watson, who has sight loss. "But I've found that talking to tech has been a laborious process - with a person having to speak very, very clearly and with specific phrases for machines to understand. Even then, the interaction has ended up bearing little resemblance to an actual conversation", said Watson.

Siri

“The thing that really changed that was Siri in 2011. For the first time we could have something that felt a lot more like a conversation with technology. Microsoft’s Cortana launch followed in 2014, giving us another digital assistant that would talk back to us.”

“The same year, with the Amazon Echo, we started to see digital assistants able to do practical things around the house, but we still needed very structured language and very carefully phrased commands to get it to do things,” explained Watson. “A further leap forward came in 2015, with Google making its technology more context aware. This means, for example, that if a song was playing you could ask your Google device ‘where’s this artist from?’ or ‘what’s his real name?’ without having to state specifically who you were talking about.”

How to build accessible conversational interfaces

Watson laid out five ways that developers can make interactions with machines as clear as possible for a wide range of people.

1 Keep machine language simple

  • Think about the context in which people might be using the device. They might be driving or cooking and need short, simple interactions.
  • Offer choices, but not too many choices (a sketch of one way to do this follows below).
  • Suggest obvious pointers to useful information.
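
As an illustration, here is a minimal Python sketch of the “few choices” idea: one short question, a small fixed set of options, and nothing more. All names here (build_prompt, MAX_CHOICES) are hypothetical and not part of any particular voice SDK.

```python
# Hypothetical sketch: keep prompts short and cap the number of choices,
# since listeners can't scan back over spoken options the way they can a menu.

MAX_CHOICES = 3  # an assumed limit; more than this is hard to hold in memory


def build_prompt(question: str, choices: list[str]) -> str:
    """Compose one short spoken question with a limited set of choices."""
    options = choices[:MAX_CHOICES]
    if len(options) == 1:
        spoken = options[0]
    else:
        spoken = ", ".join(options[:-1]) + f", or {options[-1]}"
    return f"{question} You can say {spoken}."


print(build_prompt("What would you like to hear?",
                   ["news", "music", "a podcast", "an audiobook"]))
# -> What would you like to hear? You can say news, music, or a podcast.
```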

2 Avoid idioms and colloquialisms

  • For example, terms like “it’s raining cats and dogs” or “coffee to go” might only be understood by certain audiences and so lack inclusivity. One way to catch these before they ship is sketched below.
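
One practical way to apply this, sketched here under the assumption that prompt copy is held as plain strings, is to screen it against a locale-specific glossary of idioms before release. The glossary below is illustrative only.

```python
# Hypothetical sketch: flag known idioms in prompt copy before release.
# A real project would maintain its own glossary per locale.

IDIOMS = {"raining cats and dogs", "coffee to go", "piece of cake"}


def find_idioms(prompt: str) -> list[str]:
    """Return any known idioms found in a prompt string."""
    lowered = prompt.lower()
    return sorted(idiom for idiom in IDIOMS if idiom in lowered)


print(find_idioms("Grab a coffee to go, it's raining cats and dogs!"))
# -> ['coffee to go', 'raining cats and dogs']
```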

3 Avoid dead-ends in conversation

  • Give users cues about what to say or ask next to get what they need, as in the sketch below.
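
Here is a minimal Python sketch of that idea: every answer carries a cue for what the user can say next, plus a reprompt if they fall silent. The dictionary shape is loosely modelled on the kind of JSON response voice platforms return, but it is simplified and hypothetical.

```python
# Hypothetical sketch: never leave a conversational dead-end. Each answer
# is paired with a next-step cue, and a reprompt covers silence.

def respond(answer: str, next_step_hint: str) -> dict:
    """Pair every answer with a cue so the conversation never stalls."""
    return {
        "outputSpeech": f"{answer} {next_step_hint}",
        "reprompt": next_step_hint,   # spoken again if the user says nothing
        "shouldEndSession": False,    # keep the conversation open
    }


print(respond("It's 18 degrees and sunny.",
              "You can ask for tomorrow's forecast, or say stop to finish."))
```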

4 Use familiar, natural language

  • For example, for times, say ‘three thirty in the morning’ for a UK or US audience. Don’t say ‘zero three three zero a.m.’ A sketch of this kind of formatting follows below.
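
To make that concrete, here is a hedged Python sketch that renders a time the way a person would say it. The word tables are abbreviated for brevity; a full implementation would cover every minute value.

```python
# Hypothetical sketch: speak times as people say them, not as digits.
# The minute table is abbreviated; a full version would word every minute.

from datetime import time

HOUR_WORDS = ["twelve", "one", "two", "three", "four", "five", "six",
              "seven", "eight", "nine", "ten", "eleven"]
MINUTE_WORDS = {0: "o'clock", 15: "fifteen", 30: "thirty", 45: "forty-five"}


def spoken_time(t: time) -> str:
    """Render a time in familiar spoken English for a UK/US audience."""
    hour = HOUR_WORDS[t.hour % 12]
    period = ("in the morning" if t.hour < 12
              else "in the afternoon" if t.hour < 18
              else "in the evening")
    minute = MINUTE_WORDS.get(t.minute, f"{t.minute:02d}")
    return f"{hour} {minute} {period}"


print(spoken_time(time(3, 30)))  # -> three thirty in the morning
```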

5 Provide a comparable experience

  • Users of such technology will generally need both speech and hearing to talk to machines.
  • For those with hearing loss, conversational transcripts could be shown on screen, as sketched below.
  • For those without speech, the only obvious option at the moment is simulated speech, as used by Stephen Hawking, for example.
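
As a sketch of the transcript idea, the snippet below mirrors every spoken turn into an on-screen log, so a user with hearing loss can follow the same conversation. The Transcript class and the synthesise placeholder are hypothetical.

```python
# Hypothetical sketch: mirror everything the assistant says into an
# on-screen transcript, giving users with hearing loss a comparable experience.

from dataclasses import dataclass, field


@dataclass
class Transcript:
    lines: list[str] = field(default_factory=list)

    def add(self, speaker: str, text: str) -> None:
        self.lines.append(f"{speaker}: {text}")


def speak(text: str, transcript: Transcript) -> None:
    """Send text to speech output and mirror it on screen."""
    # synthesise(text)  # placeholder for a real text-to-speech call
    transcript.add("Assistant", text)


log = Transcript()
speak("It's three thirty in the morning.", log)
print("\n".join(log.lines))  # the on-screen mirror of everything spoken
```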
