Besides cleaning up our new labs space, we’ve been pretty productive this week, and I want to give you a quick update on our progress with Voice User Interfaces. yAlexa (see previous post), our prototype combining Hybris as a Service (YaaS) and Amazon Alexa, is taking shape. This week was devoted to adding a demo UI for keeping track of the voice actions directed at Alexa. In addition, I’ve created a technical architecture overview that I want to quickly share.
Technical Architecture Overview
The key components of yAlexa are:
- the hardware such as Amazon’s Echo Dot
- the Alexa Voice Service
- a custom skill which includes configuration such as sample utterances
- the yAlexa service, which contains the custom logic for managing the YaaS shopping cart, and
- the YaaS services for our commerce use case
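To make the custom skill component a bit more concrete, here is a rough sketch of what its interaction model could look like. The intent names, slot names, and custom slot type below are illustrative assumptions for this post, not the actual yAlexa configuration:

```json
{
  "intents": [
    {
      "intent": "AddToCartIntent",
      "slots": [
        { "name": "Product", "type": "LIST_OF_PRODUCTS" }
      ]
    },
    { "intent": "CheckoutIntent" }
  ]
}
```

Alongside a schema like this, the skill configuration also lists sample utterances (e.g. phrases mapping to AddToCartIntent with the product as a slot value), which is what Alexa uses to match spoken commands to intents.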
For the demo UI, I created a Bootstrap-based page that shows the current status of the cart. All answers from Alexa are displayed on that screen, including the negative/error responses. The top shows the products configured for the current tenant; here we see a bunch of typical fridge items. Below that, I also show some example utterances, which helps when giving the demo. After all, it is still a bit complex to anticipate each and every phrase someone might use to “add something to the cart”. As voice commands are spoken and understood by Alexa, our yAlexa service parses the requests and performs the corresponding actions against the YaaS commerce system, for example adding a product to the cart or checking the cart out.
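The dispatch step described above, mapping a recognized intent to a commerce action and a spoken reply, can be sketched roughly as follows. This is a minimal, self-contained illustration: the intent names, slot structure, and reply format are assumptions for this post, and the real service would call the YaaS APIs instead of returning an action tuple:

```python
def handle_alexa_request(request: dict) -> dict:
    """Map a parsed Alexa intent to a commerce action and build the spoken reply.

    Illustrative sketch only: intent/slot names are hypothetical, and the
    real yAlexa service would invoke the YaaS cart/checkout services here.
    """
    intent = request.get("intent", {})
    name = intent.get("name")
    slots = intent.get("slots", {})

    if name == "AddToCartIntent":
        product = slots.get("Product", {}).get("value")
        if not product:
            # Negative/error responses also show up on the demo UI screen.
            return {"speech": "Sorry, I did not catch which product to add."}
        return {
            "speech": f"Okay, I added {product} to your cart.",
            "action": ("add_to_cart", product),
        }
    if name == "CheckoutIntent":
        return {
            "speech": "Your cart has been checked out.",
            "action": ("checkout", None),
        }
    return {"speech": "Sorry, I cannot help with that yet."}


if __name__ == "__main__":
    reply = handle_alexa_request(
        {"intent": {"name": "AddToCartIntent",
                    "slots": {"Product": {"value": "milk"}}}}
    )
    print(reply["speech"])
```

The appeal of keeping this mapping in one small service is that the skill configuration stays purely declarative, while all tenant-specific commerce logic lives behind a single endpoint.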