yAlexa Technical Architecture & Update for our Alexa-based prototype

Besides cleaning up our new labs space, we’ve been pretty productive this week, and I want to give you a quick update on our progress with Voice User Interfaces. yAlexa (see previous post), our prototype around Hybris as a Service and Amazon Alexa, is taking shape. This week was devoted to adding a demo UI for keeping track of the voice actions directed at Alexa. In addition, I’ve created a technical architecture diagram that I quickly wanted to share.

Fun with Alexa & Hybris as a Service: yalexa

It’s a shame I’ve not written about this earlier. We’ve got Amazon’s Alexa and also Google Home available at Hybris Labs in Munich, but I’ve had so many other things going on that I just could not concentrate on it much. Today, I finally had a few hours to play a bit more with Amazon’s Alexa. While I need to do more with Google Home, I’ve tried both to some degree now, and I find Alexa’s overall programming and configuration simpler. Amazon is of course also trying to lock you in with AWS Lambda functions – but you have a choice, and my choice was to use my own Cloud Foundry-based backend and YaaS APIs to implement the business logic.
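
To make that a bit more concrete, here is a minimal sketch of what such a skill endpoint could look like in node.js on Cloud Foundry. This is just an illustration under assumptions, not our actual yAlexa code: the /alexa path, the intent name and the fetchProductCount() helper are all made up.

```javascript
// Hypothetical HTTPS endpoint for an Alexa custom skill, hosted as a plain
// node.js app on Cloud Foundry instead of AWS Lambda. Path, intent name and
// the YaaS call are placeholders for illustration only.
const express = require('express');
const app = express();
app.use(express.json());

// Stub for a real YaaS REST call (e.g. to the product service).
async function fetchProductCount() {
  return 42;
}

app.post('/alexa', async (req, res) => {
  const request = req.body.request || {};
  let text = 'Welcome to the lab!';

  if (request.type === 'IntentRequest' && request.intent.name === 'GetProductCountIntent') {
    // the actual business logic would call a YaaS API here
    const count = await fetchProductCount();
    text = `There are currently ${count} products in the store.`;
  }

  // Alexa expects this JSON response envelope
  res.json({
    version: '1.0',
    response: {
      outputSpeech: { type: 'PlainText', text },
      shouldEndSession: true
    }
  });
});

app.listen(process.env.PORT || 8080);   // Cloud Foundry injects the PORT
```

The nice thing about this setup is that the same code runs unchanged locally and in the cloud, since Cloud Foundry only expects the app to listen on the injected port.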

General Update & Final Architecture Diagram for Expose

It’s been a while since I wrote about expose, but I am finally sitting at the Munich airport again, which is my favorite time to write blog posts. From a technical point of view, expose is in the final phase of being polished. We’ve worked with the designers at SNK to create great user interfaces, ironed out a few bugs here and there, and are currently considering two showrooms (Munich and New York) in which to install this prototype. While these discussions and the details will need a few more weeks, I think this prototype is technically locked down and done. So it’s time to take a final look at it and wrap it all up.

Bullseye partially open-sourced – have a look!

Since we introduced Bullseye, a Hybris-as-a-Service (YaaS) based prototype around in-store customer engagement & commerce, for the first time at the Hybris Summit ’16 in Munich, we’ve been showing and replicating it across the globe like crazy. We’ve even had companies like BASF run public trials in their stores, and just as I write these sentences, we’ve signed up showrooms in Singapore and Thailand. It’s a truly global prototype, highly flexible in terms of configuration and running on our beloved YaaS infrastructure in the cloud.

While the software parts of this prototype (below is an architecture diagram to help you remember) are easy to scale, we’ve had quite a few challenges scaling the hardware. Our platforms – containing a small microcontroller, a light sensor and an LED ring – are hand-made and hand-soldered, each with a 3D-printed case that alone takes about 4 hours to print in decent quality. We’ve created many of these platforms ourselves, spending days and weeks making new platforms for new prototype installations somewhere on this globe.

While we’ve been successful in finding a local electronics engineering company that has already produced these platforms for several projects, the platforms still needed to come to our desks to be flashed with the correct firmware and initialized. So far we’ve not been able to outsource these steps, as there’s software involved that we could not easily just hand over to them.

That’s changed now! We’ve successfully open-sourced all the hardware-facing parts of our Bullseye prototype: take a look at the plat GitHub page! This will greatly facilitate the production of platforms in the future, as the hardware & software of the platforms are now completely available to others. It would also be cool to see variations – we’ve used a light sensor and an LED ring in our platform, but you could easily swap those for other sensors and actuators!

In the end, our new open source project is a great blueprint for connected devices. It will not fit all use cases of course, but I could well imagine that it works for a lot of the ideas people have. Here are a few things you can do and learn with this project:

  • Figure out how we reliably connect a Raspberry Pi to the cloud via MQTT and node.js upon booting the device (see the sketch below)
  • Figure out how to send data from the Raspberry Pi to connected/wired platforms via USB, potentially with USB hubs in between to scale the number of platforms connected
  • Figure out how to write a serial protocol to collect events from the platforms or send commands to them

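To give you a feel for the first two points, here is a tiny, simplified sketch of the MQTT-plus-serial bridge idea. The real code lives in the plat repository; the broker URL, topic names, device path and message format below are just assumptions for illustration.

```javascript
// Simplified sketch (not the actual plat code): connect to an MQTT broker on
// boot and bridge line-based serial traffic from a USB-connected platform.
// Assumes the npm packages "mqtt" and "serialport"; note that the serialport
// API differs between major versions.
const mqtt = require('mqtt');
const { SerialPort } = require('serialport');
const { ReadlineParser } = require('@serialport/parser-readline');

const client = mqtt.connect('mqtts://broker.example.com');   // placeholder broker
const port = new SerialPort({ path: '/dev/ttyUSB0', baudRate: 115200 });
const lines = port.pipe(new ReadlineParser({ delimiter: '\n' }));

client.on('connect', () => {
  // listen for commands addressed to this device
  client.subscribe('plat/commands/device-1');
});

client.on('message', (topic, message) => {
  // forward the command down the wire to the microcontroller
  port.write(message.toString() + '\n');
});

lines.on('data', (line) => {
  // each line from the platform (e.g. a light-sensor event) goes up to the cloud
  client.publish('plat/events/device-1', line.trim());
});
```

In this sketch the Raspberry Pi acts as a simple bridge between a line-based serial protocol and MQTT – the actual plat protocol may differ, so check the repository for the details.
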
Have a look, clone the repo, try it out! After all: Have Fun!


An update on expose: now adding a party booth

Finally an update on the latest developments around expose, our location / action tracking prototype that we develop on top of YaaS. You might remember that we track the location of RFID labels via the location readers. Besides locating the labels, we have also developed an “action reader” subsystem that is used to engage with the user of the RFID label on a 1:1 basis. For the action readers, the user has to actively place their RFID label close to a small, matchbox-sized antenna to be scanned. Below is the updated system architecture:

Expose Technical Architecture (1)

While the architecture / framework for all action readers is the same (they send their scanned labels to a common backend API), we pick the correct screen to show on each tablet based on the specific MQTT topics that are used. The action readers post to the backend with the tenant/reader ID included, and the backend forwards the data to the appropriate screen, which is connected via Socket.IO.
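
Here is roughly what such a forwarding backend could look like in node.js with Express and Socket.IO. This is an illustration only – the endpoint path, field names and the room naming scheme are assumptions, not the actual expose API.

```javascript
// Illustrative only (not the actual expose backend): route a scanned label to
// the screen registered for that tenant/reader combination.
const express = require('express');
const app = express();
const http = require('http').createServer(app);
const io = require('socket.io')(http);

app.use(express.json());

// a tablet screen joins the room for "its" reader when it connects
io.on('connection', (socket) => {
  socket.on('register', ({ tenant, readerId }) => {
    socket.join(`${tenant}/${readerId}`);
  });
});

// an action reader posts its scans here, including tenant and reader id
app.post('/actionreaders/scan', (req, res) => {
  const { tenant, readerId, label } = req.body;
  io.to(`${tenant}/${readerId}`).emit('scan', { label });   // forward to the matching screen
  res.sendStatus(202);
});

http.listen(3000);
```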

Right now we have completed these action reader setups:

  • signup: a kiosk where new users with fresh RFID labels are onboarded or may change their data
  • bar: a kiosk where an employee or a barkeeper can log a drink they take out of the fridge.
  • party: a party booth that allows you to have a personalized party based on the data that we know about the user.

For this post, I wanted to specifically pick the party action reader system. It consists of:

  • an action reader that is tied to the tablet screen,
  • the tablet screen for the party booth, and
  • the party booth itself.

The action reader system looks like this:

Expose Action Reader - Technical (1)

The real fun comes in when you look at the party booth. It’s a pretty nice system with a Raspberry Pi at its core.

Expose Action Station - Party Booth Technical (1)


Right now, the booth looks rough 🙂 But we’re in discussions with a local artist to create a proper booth enclosure around it. It’s already a lot of fun to use, believe me!


The software of this system runs on node.js, starts automatically upon boot and has been quite stable so far. The sequence for using the booth is this (a rough code sketch follows the list):

  • a new user comes into the booth and holds their RFID label close to the action scanner
  • the tablet screen (here: our TV screen) shows a welcome message and the color and music choice of the customer. This data was provided by the user during the onboarding/signup process
  • the party booth Raspberry Pi starts playing back the music according to the profile
  • the DotStar LEDs are colored according to the profile – in combination with the rotating disco ball, this creates a nice atmosphere in the booth
  • the fog machine turns on for a few seconds, so the bottom of the box fills with fog
  • while the user is in the booth and the music is playing, pictures are taken via the Raspberry Pi camera. These pics appear in real time on the tablet screen
  • once the party is over, all pics are aggregated into an animated GIF and again shared to the tablet screen
  • the user can now select one image and it will be shared to the ylabsparty Twitter account. Have a look, it’s already pretty cool!

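For the curious, here is a rough sketch of how such a booth loop could be wired up in node.js by shelling out to common Raspberry Pi tools: raspistill for the camera, mpg123 for the music and ImageMagick’s convert for the animated GIF. This is just one plausible way to do it under those assumptions – the LED, fog machine and Twitter parts are left out – and not necessarily how our actual booth code works.

```javascript
// Hypothetical booth loop: play the user's song, take photos while it plays,
// then assemble them into an animated GIF. The profile object and file paths
// are made up for illustration.
const { spawn, execFile } = require('child_process');
const path = require('path');

function runParty(profile, done) {
  // 1. start the user's music choice
  const music = spawn('mpg123', [profile.songFile]);

  // 2. take a photo every two seconds while the music plays
  const pics = [];
  const camera = setInterval(() => {
    const file = path.join('/tmp', `party-${Date.now()}.jpg`);
    execFile('raspistill', ['-t', '1', '-o', file], () => pics.push(file));
  }, 2000);

  // 3. when the song ends, stop the camera and build the animated GIF
  music.on('exit', () => {
    clearInterval(camera);
    execFile('convert', ['-delay', '50', '-loop', '0', ...pics, '/tmp/party.gif'],
      () => done('/tmp/party.gif'));
  });
}

// example usage with a made-up profile:
// runParty({ songFile: '/home/pi/music/party.mp3' }, gif => console.log('GIF ready at', gif));
```
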
All right – good for now, ready for the weekend. I hope to update you again soon; till then, follow us via the ylabsparty Twitter account!