Another topic at this year’s Mobil Tech Conference was conversational interfaces, especially in connection with voice assistants. In addition to the presentation Design Patterns for Conversational User Interfaces and the keynote How Conversational Commerce is radically changing shopping, there were also talks about Alexa and Google Home.
(featured picture: The Conversation by Arnold Lakhovsky)
Design Patterns for Conversational User Interfaces
Whether the interface uses speech or text, humans share the following characteristics:
- They only stay engaged for a short time if no images are available.
- From a selection list, they remember at most five items.
- But they also learn dialogue patterns.
I mentioned dialogue patterns in last year’s blog post In-App Payment in a Text Adventure with Alexa. “Alexa, ask …” still feels odd, but humans will adapt.
Design patterns can help support the human here.
By establishing a dialogue of questions and answers, a specific product can be narrowed down from a complete portfolio in 5 to 10 dialogue steps. Mixed-initiative interaction is very helpful here: if the user names the object together with a color, two dialogue steps are captured at once.
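The mixed-initiative idea can be sketched as a simple slot-filling loop. A minimal sketch, assuming hypothetical slots (`product`, `color`), prompts, and vocabularies not taken from any specific assistant SDK:

```python
# Minimal slot-filling sketch for mixed-initiative interaction.
# Slot names, prompts, and vocabularies are illustrative assumptions.

SLOTS = ["product", "color"]
PROMPTS = {
    "product": "What would you like to order?",
    "color": "Which color?",
}
KNOWN_PRODUCTS = {"shirt", "mug", "bag"}
KNOWN_COLORS = {"red", "blue", "black"}

def parse_utterance(utterance):
    """Extract every slot value the user mentioned in one utterance."""
    found = {}
    for word in utterance.lower().split():
        if word in KNOWN_PRODUCTS:
            found["product"] = word
        elif word in KNOWN_COLORS:
            found["color"] = word
    return found

def next_prompt(filled):
    """Return the question for the first missing slot, or None if done."""
    for slot in SLOTS:
        if slot not in filled:
            return PROMPTS[slot]
    return None

# "a blue shirt" fills both slots in a single step:
filled = parse_utterance("I want a blue shirt")
print(filled)               # product and color captured at once
print(next_prompt(filled))  # None -> no further question needed
```

Because the parser accepts values for any slot at any time, a user who volunteers the color early skips one question entirely, which is exactly the two-steps-in-one effect described above.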
Another possibility is a proactive question, such as “The same pizza as last time?”. This requires an intelligent system, e.g. with machine learning in the background.
Furthermore, a certain amount of common-sense logic is useful. If I set an appointment for 2 o’clock, the application, whether text- or speech-based, should assume that I sleep at night and schedule the appointment for 2:00 PM.
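This waking-hours assumption can be expressed as a small disambiguation rule. A sketch, where the cut-off hour of 7 AM is an assumption chosen for illustration:

```python
from datetime import time

def disambiguate_hour(hour, waking_start=7):
    """Map an ambiguous 1-12 o'clock hour onto the 24-hour clock,
    assuming the user is asleep at night: hours before `waking_start`
    are interpreted as afternoon/evening."""
    if not 1 <= hour <= 12:
        raise ValueError("expected an ambiguous 1-12 o'clock hour")
    if hour == 12:
        return time(12, 0)         # "12 o'clock" -> noon
    if hour < waking_start:
        return time(hour + 12, 0)  # "2 o'clock" -> 14:00
    return time(hour, 0)           # "9 o'clock" -> 09:00

print(disambiguate_hour(2))  # 14:00:00
print(disambiguate_hour(9))  # 09:00:00
```

A real assistant would combine such a rule with context (calendars, past behavior), but even this simple default avoids scheduling meetings in the middle of the night.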
The lower the cognitive load, the easier it is for humans to communicate with the system. With a monitor in front of us, we can see how far along we are in the order (shopping cart, delivery dates, address, …). But when we talk to the system by voice (or text), we don’t necessarily know what lies ahead. Points of reference such as “almost done” are helpful here.
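Such reference points can be generated directly from the dialogue state. A minimal sketch, assuming hypothetical checkout steps and illustrative wording for the cues:

```python
# Hypothetical checkout steps; the wording of the cues is an assumption.
STEPS = ["shopping cart", "delivery date", "address", "payment"]

def progress_cue(completed):
    """Give the user a spoken point of reference for how far along they are."""
    remaining = len(STEPS) - completed
    if remaining <= 0:
        return "Done! Your order is placed."
    if remaining == 1:
        return f"Almost done - just the {STEPS[completed]} left."
    return f"{completed} of {len(STEPS)} steps done."

print(progress_cue(3))  # Almost done - just the payment left.
```

Appending such a cue to each voice prompt replaces the orientation that a visible checkout progress bar would otherwise provide.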
How Conversational Commerce radically changes shopping
Achim Himmelreich’s keynote argued that speech will replace text (chat bots) in the near future. Brands have to engage with voice assistants, because otherwise it will be difficult to get attention once Alexa and Google are in the living room. Voice search is limited to a maximum of three results, just as web search effectively displays a maximum of ten (unless you are using DuckDuckGo).
Trust in speech is greater than in writing, as the spoken word is not visible and is therefore more easily forgotten. In addition, users want a dialogue, as in the old days when they spoke directly to the salesperson (as in a German Tante-Emma-Laden, a mom-and-pop shop).
Speech is the strongest channel here, and Google achieves 95% speech-recognition accuracy. Teenagers no longer type in WhatsApp; they send voice messages (and think Facebook is for old people). When people use voice, you have to be there as a brand. One example is Staples (together with IBM), which designed a voice assistant for the office so that office supplies can be ordered by voice.
So, will we see Charlize, the sister of Charly the chat bot, as a voice assistant?