Bullseye partially open-sourced – have a look!

Since we introduced Bullseye, a Hybris-as-a-Service (YaaS) based prototype for in-store customer engagement & commerce, for the first time at the Hybris Summit ’16 in Munich, we’ve been showing and replicating it across the globe like crazy. We’ve even had companies like BASF do public trials in their stores, and just as I write these sentences, we’ve signed up showrooms in Singapore and Thailand. It’s a truly global prototype, highly flexible in its configuration and running on our beloved YaaS infrastructure in the cloud.

While the software parts of this prototype (below is an architecture diagram to help you remember) are easy to scale, we’ve had quite a few challenges scaling the hardware. Our platforms – containing a small microcontroller, a light sensor and an LED ring – are hand-made and hand-soldered, each with a 3D-printed case which alone takes about 4 hours to print at decent quality. We’ve created many of these platforms ourselves, spending days and weeks making new platforms for new prototype installations somewhere on this globe.

While we’ve been successful in finding a local electronics engineering company that has already produced these platforms for several projects, the platforms still needed to come to our desks to be flashed with the correct firmware and initialized. So far, we’ve not been able to outsource these steps, as there’s software involved that we could not easily just hand over to them.

That’s changed now! We’ve successfully open-sourced all the hardware-facing parts of our Bullseye prototype: take a look at the plat GitHub page! This will greatly facilitate the production of platforms in the future, as the hardware & software of the platforms are now completely available to others. It would also be cool to see variations – we’ve used a light sensor and an LED ring in our platform, but you could easily swap those for other sensors and actuators!

In the end, our new open source project is a great blueprint for connected devices. It will not fit all use cases, of course, but I could well imagine that it works for a lot of the ideas people have. Here are a few things you can do/learn with this project:

  • Figure out how we reliably connect a Raspberry Pi to the cloud via MQTT and node.js upon booting the device (a minimal connection sketch follows this list)
  • Figure out how to send data from the Raspberry Pi to connected/wired platforms via USB, potentially with USB hubs in between to scale the number of platforms connected
  • Figure out how to write a serial protocol to collect events from the platforms or send commands to them
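
To give you a feel for the first point: the repo does this with node.js, but the pattern translates to any MQTT client. Below is a minimal sketch in Java using the Eclipse Paho client – the broker URL, client ID and topics are made-up placeholders, not values from the repo.

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttMessage;

// Minimal "connect to the cloud on boot" sketch, e.g. launched from a
// systemd unit or /etc/rc.local on the Raspberry Pi. All names below are
// illustrative placeholders, not taken from the Bullseye repo.
public class PlatformGateway {
    public static void main(String[] args) throws Exception {
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "bullseye-pi-01");
        MqttConnectOptions options = new MqttConnectOptions();
        options.setAutomaticReconnect(true); // keep retrying if the network flaps at boot
        options.setCleanSession(true);
        client.connect(options);
        // Announce that this device is up; real payloads would carry platform events.
        client.publish("bullseye/pi-01/status", new MqttMessage("online".getBytes()));
    }
}
```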

Have a look, clone the repo, try it out! After all: Have Fun!


Moto Update: the smartphone is now our MQTT/BLE Gateway

It’s time for an update about ‘moto’ – sorry that this did not happen earlier, but I’ve been busy with events like #cebit, #iotcon and #internetworld. We’ve now finalized the hardware design, and our good friends at DerGrueneFisch are manufacturing a small series of moto prototypes (9, to be exact) in the coming weeks. This also means that I’ve moved on from hardware challenges to more software-related ones.

Moto Architecture Diagram

If you remember the architecture diagram (find it again above), we connect the motos wirelessly via BLE. While I’ve been using some quick & dirty node.js based scripts on my Mac to test the communication over BLE, I’ve now written an Android app that acts as an MQTT/BLE gateway. Powered up, it launches two services: the BLEService and the MQTTService. These services are started and then continue to run in the background, loosely coupled via Android Intents. Right now, we fire up the services when the Android Activity (that’s what is shown on the screen when an Android app starts) becomes visible, and we stop them again once the app becomes invisible. This is really convenient, as we tear down and fire up the services a lot during testing.
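
As a rough sketch of that lifecycle wiring (the two service class names are from this post, everything else is my assumption), the Activity could start and stop the services like this:

```java
import android.app.Activity;
import android.content.Intent;

// Hedged sketch: fire up both services while the Activity is visible,
// tear them down when it disappears. Only the service names follow the post.
public class MotoActivity extends Activity {
    @Override
    protected void onStart() {
        super.onStart();
        startService(new Intent(this, BLEService.class));
        startService(new Intent(this, MQTTService.class));
    }

    @Override
    protected void onStop() {
        stopService(new Intent(this, BLEService.class));
        stopService(new Intent(this, MQTTService.class));
        super.onStop();
    }
}
```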

BLEService
This sticky service (meaning the system may restart it if it was killed due to resource constraints) scans for new, non-connected motos and then tries to connect. Once connected, we subscribe to notifications for one BLE characteristic, which acts as the event stream from the hardware. We also save a reference to another identified characteristic that we use to send our commands to. In order to react to commands and be able to forward events, we use Android Intents. The BLEService registers listeners for all intents that the MQTTService sends out, as they contain the moto commands that need to be forwarded to the motos. The BLEService also maps the incoming commands to the corresponding motos and – new – is now namespaced. That means users of the moto Android app will later be able to choose their namespace, so their analytics data is kept separate from others’.
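
To make the flow concrete, here is a hedged sketch of the connect-and-subscribe step inside the BLEService’s BluetoothGattCallback – the event characteristic UUID, the helper method and the intent action are illustrative assumptions; only the Client Characteristic Configuration descriptor UUID is the standard Bluetooth one:

```java
// Inside the BLEService's BluetoothGattCallback. EVENT_CHAR is a placeholder,
// not the real moto event characteristic.
private static final UUID EVENT_CHAR = UUID.fromString("0000aaaa-0000-1000-8000-00805f9b34fb");
private static final UUID CCC = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

@Override
public void onServicesDiscovered(BluetoothGatt gatt, int status) {
    BluetoothGattCharacteristic events = findCharacteristic(gatt, EVENT_CHAR); // hypothetical helper
    // Enable notifications locally...
    gatt.setCharacteristicNotification(events, true);
    // ...and on the device, by writing the Client Characteristic Configuration descriptor.
    BluetoothGattDescriptor ccc = events.getDescriptor(CCC);
    ccc.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
    gatt.writeDescriptor(ccc);
}

@Override
public void onCharacteristicChanged(BluetoothGatt gatt, BluetoothGattCharacteristic ch) {
    byte[] event = ch.getValue(); // raw event bytes from the moto firmware
    // Forward the event, loosely coupled, via a broadcast intent (action name is illustrative).
    Intent intent = new Intent("com.example.moto.EVENT");
    intent.putExtra("payload", event);
    sendBroadcast(intent);
}
```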

MQTTService
For MQTT, we’re using the only Android/Java MQTT client we were able to find: Paho. Although there seems to be an existing Android Service wrapper around the Paho MQTT client, it is poorly documented and it really was simpler to create our own service that does exactly what we want it to do. The MQTTService is again sticky and should be running all the time. It tries to keep a constant connection to the MQTT broker that we host on Amazon EC2. It is subscribed to all commands that fall into its namespace, e.g. moto/<namespace>/+/command – an MQTT topic with a + wildcard, meaning it will receive messages sent to moto/<namespace>/1/command, for example.
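
For illustration, the subscription and the fan-out to the BLEService could look roughly like this – broker host, client ID and the intent action are placeholders; only the topic pattern is the one described above:

```java
// Inside the MQTTService, so sendBroadcast() is available.
private void connectToBroker(final String namespace) throws MqttException {
    MqttClient client = new MqttClient("tcp://ec2-host.example.com:1883", "moto-gateway-" + namespace);
    client.setCallback(new MqttCallback() {
        @Override
        public void messageArrived(String topic, MqttMessage message) {
            // Topic looks like moto/<namespace>/<motoId>/command.
            String motoId = topic.split("/")[2];
            Intent intent = new Intent("com.example.moto.COMMAND"); // illustrative action
            intent.putExtra("motoId", motoId);
            intent.putExtra("payload", message.getPayload());
            sendBroadcast(intent); // the BLEService listens for these intents
        }
        @Override public void connectionLost(Throwable cause) { /* schedule a reconnect here */ }
        @Override public void deliveryComplete(IMqttDeliveryToken token) { }
    });
    client.connect();
    client.subscribe("moto/" + namespace + "/+/command"); // + matches exactly one topic level
}
```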

Getting MQTT or BLE running on Android alone and for a small demo is pretty easy. The complexity comes once you try to connect to multiple devices at once, because the Android BLE APIs are synchronous and firing too many BLE requests at once will simply cause some of them to be dropped. So one has to work with a few delays and timers here and there to make sure it really works reliably. The idea is also that sales agents with the app installed can roam freely, and if one of them is close to the BLE devices, their phone/app will connect transparently. So far, this works really nicely. After a few seconds outside of the coverage area, the BLEService starts to receive disconnect callbacks and we remove the moto element from the list of connected ones. This frees it up to be picked up by another sales agent whose device has the app installed.
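
The usual remedy – and what the delays and timers above boil down to – is to serialize GATT operations so that only one is in flight at a time. A hedged sketch follows; the 50 ms delay is an arbitrary illustrative value, not a measured constant:

```java
import android.os.Handler;
import android.os.Looper;
import java.util.ArrayDeque;
import java.util.Queue;

// Hedged sketch: serialize GATT operations so only one is in flight at a time,
// since Android's BLE stack silently drops requests issued back-to-back.
public class GattOperationQueue {
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private final Handler handler = new Handler(Looper.getMainLooper());
    private boolean busy = false;

    public synchronized void enqueue(Runnable operation) {
        pending.add(operation);
        if (!busy) next();
    }

    // Call from the GATT callbacks (onCharacteristicWrite etc.) when an operation completes.
    public synchronized void operationFinished() {
        busy = false;
        next();
    }

    private void next() {
        Runnable op = pending.poll();
        if (op == null) return;
        busy = true;
        // Small gap between operations; purely empirical, adjust as needed.
        handler.postDelayed(op, 50);
    }
}
```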

The Protocol
At least for now, I’ve also frozen the “protocol”, i.e. which characteristics are used, what data is sent, and how it is determined what is possible and what is not. First of all, for sending and receiving data from/to the moto elements, I use two separate BLE characteristics. This simply keeps everything a bit more organized and easier to understand. For sending from the BLE hardware to the smartphone, struct-based events are used, defined as plain structs straight in the Arduino IDE.

Mainly due to issues with setting up multiple BLE notifications from Android at once, I decided to distinguish the two events that I send out via the first byte – see the “eventType” byte, which is different for a PresenceData Event and a MetaData Event. MetaData Events are sent out regularly to inform the smartphone, and later the server, that a device is live. We visualize the MetaData Events again via heartbeats; you can tell within 10 seconds whether a device is connected or not. The PresenceData Events are sent whenever the presence state (customer in front/customer lost) changes. Just like with the tiles, we also calculate the duration of the presence directly on the device.
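
The original Arduino struct isn’t reproduced here, but the gateway has to unpack those notification bytes anyway. Below is a hedged decoding sketch in Java – the field order, widths, type codes and little-endian layout are my assumptions based on the description above, not the actual firmware definition:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hedged decoder for the two event types described above. Byte layout and
// type codes are illustrative assumptions, not the actual moto firmware struct.
public class MotoEventDecoder {
    static final byte EVENT_PRESENCE = 0x01; // assumed type codes
    static final byte EVENT_META     = 0x02;

    public static void decode(byte[] raw) {
        ByteBuffer buf = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN); // AVR is little-endian
        byte eventType = buf.get(); // first byte distinguishes the event kind
        switch (eventType) {
            case EVENT_PRESENCE:
                boolean present = buf.get() != 0;             // customer in front / customer lost
                long durationMs = buf.getInt() & 0xFFFFFFFFL; // duration computed on the device
                System.out.println("presence=" + present + " duration=" + durationMs + "ms");
                break;
            case EVENT_META:
                // Heartbeat: the device regularly announces that it is alive.
                System.out.println("meta/heartbeat received");
                break;
        }
    }
}
```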

For incoming data, so-called moto commands, the protocol is slightly more complex. We distinguish between two broad categories of commands:

  • “standard” commands can change the current RGB colors and the motor state (this includes on/off, direction and speed level of the motor)
  • “special” commands are distinguished from standard commands by the value of the first byte. To be able to extend the command mechanism, they introduce a “subcommand” byte as the second byte; from the third byte on, the special command’s data is sent. Right now I’ve specified a “blink” command that will blink the RGB pixels for a certain duration in a certain color. Another implemented command is a rainbow chase, where the pixels update according to a color wheel, which ends up looking like a rainbow.

A sketch of how I deal with incoming commands is below:
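
The real handler runs as C code on the moto’s microcontroller; the following Java-flavored sketch just mirrors the branching described above. The sentinel byte value, field layout and helper names are my assumptions, not the actual protocol constants:

```java
// Illustrative reconstruction of the command dispatch described above.
public class MotoCommandHandler {
    static final byte SPECIAL_MARKER = (byte) 0xFF; // assumed sentinel for "special" commands
    static final byte SUB_BLINK   = 0x01;           // assumed subcommand codes
    static final byte SUB_RAINBOW = 0x02;

    public static void handle(byte[] cmd) {
        if (cmd[0] != SPECIAL_MARKER) {
            // "standard" command: RGB color plus motor state (on/off, direction, speed level).
            int r = cmd[0] & 0xFF, g = cmd[1] & 0xFF, b = cmd[2] & 0xFF;
            byte motorState = cmd[3];
            setColor(r, g, b);
            setMotor(motorState);
        } else {
            // "special" command: second byte selects the subcommand, data follows from byte 3.
            switch (cmd[1]) {
                case SUB_BLINK:
                    blink(cmd[2] & 0xFF, cmd[3] & 0xFF, cmd[4] & 0xFF, cmd[5] & 0xFF); // r, g, b, duration
                    break;
                case SUB_RAINBOW:
                    rainbowChase(); // pixels walk a color wheel
                    break;
            }
        }
    }

    // Hypothetical actuator hooks, standing in for the firmware routines.
    static void setColor(int r, int g, int b) { }
    static void setMotor(byte state) { }
    static void blink(int r, int g, int b, int duration) { }
    static void rainbowChase() { }
}
```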

Android UI Adapter
One last element that got a lot of love from me is a special UI Adapter for the Android app. There’s nothing super special about this data/UI binding; it is just a lot of work. The UI will later try to come close to the action that the moto element is performing: if it blinks, the UI element in the Android app will blink, colors will be reflected as well as possible, and of course the spinning status will be represented. Once I have a few motos connected at once, I will shoot a few pics and show this to you in an update.
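
As a tiny illustration of that binding (MotoState, the view names and the animation are all my assumptions, not the app’s actual classes), the adapter’s bind step might mirror the device state like this:

```java
// Inside a RecyclerView.Adapter in the gateway app; everything here is illustrative.
@Override
public void onBindViewHolder(MotoViewHolder holder, int position) {
    MotoState moto = motos.get(position);
    holder.swatch.setBackgroundColor(Color.rgb(moto.r, moto.g, moto.b)); // reflect current color
    if (moto.blinking) {
        // Mirror the device's blink with a simple alpha animation.
        holder.swatch.animate().alpha(0f).setDuration(300).withEndAction(
                () -> holder.swatch.animate().alpha(1f).setDuration(300));
    }
    holder.spinner.setVisibility(moto.motorOn ? View.VISIBLE : View.GONE); // spinning status
}
```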

Up next
Now that I have the hardware specced out and a running gateway prototype via the Android app, the next thing I’ll spend time on is the server side that collects all the data. This will also include a RESTful API to control each moto element, client/server communication via socket.io for the UI, and early ideas for the skinning. I hope to receive the first elements of the produced series within 2–3 weeks and will try to update you on the progress.