General Update & Final Architecture Diagram for Expose

It’s been a while since I wrote about expose, but finally I am sitting at the Munich airport again, which is my favorite time to write blog posts. From a technical point of view, expose is in the final phase of being polished. We’ve worked with the designers at SNK to create great user interfaces, ironed out a few bugs here and there, and are currently considering two showrooms (Munich and New York) in which to install this prototype. While these discussions and the details will need a few more weeks, I think the prototype is technically locked down and done. So it’s time to take a final look at it and wrap it all up.

So again, what is it all about? 

There are two perspectives we can take: the technical and the business perspective.

From a technical point of view, expose is a Hybris Labs experiment that combines RFID and IoT technology together with YaaS. The location subsystem constantly scans for registered RFID labels and tries to determine the best possible location. The data is analyzed and yields an individual journey per user of the system as well as overall location analytics. The action subsystem allows individual participants of this experiment to interact on a 1:1 basis with the system. Current action stations include the signup station, the bar station, the journey station and the party booth – all four offer personalized interactions based on the customer’s previous behavior.

Business-wise, such a system can for example be used at events and showrooms. Via the technical means described above, we can track the location of RFID labels, which could be attached to a visitor’s event badge. From an event coordinator’s perspective, real-time analytics about where people are, where people are likely to go and what they do (action subsystem) can be offered. While the backend users of such a system gain insights into their event and the flow of visitors, there’s something in it for the visitors that carry the RFID labels, too: they can interact at various action points on a one-to-one basis. This means the barkeeper will remember your name and favorite drink, the event host might be able to recommend people to meet based on your interests or location history, etc.

As I am a technical guy, let’s concentrate on the technical architecture diagram first – have a look:

expose :: technical architecture

From bottom to top, you can see these layers of the expose system:

  • The very basis of the system is the set of micro-services powered by YaaS – Hybris as a Service. This prototype uses quite a lot of them: after signup, a new customer account is created via the customer service; the products at the bar are of course maintained via the product service; and a purchase results in a cart being checked out for the customer.
  • Once again, we’ve extended YaaS with custom micro-services for expose. Our RFID readers send HTTP POST requests in regular 3-second intervals, and the endpoints for those are part of the expose service within the expose package. To be brutally honest with you, at this point the configuration is rather static within this service, but at a later stage we could manage it on a tenant-by-tenant basis via a builder module. What is fully accurate in the diagram above, though, are the user interfaces, which are rendered by the expose service.
  • We’re now touching the physical world, with RFID readers installed at various locations of a showroom and at the action stations where users can interact on a 1:1 basis. Our default setup will use 5 locations (Impinj Speedway Connect readers) and 4 action points. The latter are Raspberry Pis which my colleague Lars Gregori extended with a custom shield. We attach a small antenna to them so users can put their RFID label on top to have it read. The location readers constantly send the scanned RFID labels to our service, where we process the location information with a self-made algorithm and store the data in the YaaS document storage – a rough sketch of such an endpoint follows below.
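To make this concrete, here is a minimal sketch of what such a location endpoint could look like – a node.js/Express illustration, not the actual expose service; the route, payload shape and the "strongest recent signal wins" heuristic are all assumptions:

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const app = express();
app.use(bodyParser.json());

const lastSeen = {}; // tagId -> { readerId, rssi, ts }

// hypothetical endpoint for the periodic reader POSTs (every ~3s)
app.post('/readers/:readerId/reports', (req, res) => {
  const { readerId } = req.params;
  (req.body.tags || []).forEach(({ epc, rssi }) => {
    const prev = lastSeen[epc];
    // naive best-location pick: strongest recent signal wins
    if (!prev || rssi > prev.rssi || Date.now() - prev.ts > 3000) {
      lastSeen[epc] = { readerId, rssi, ts: Date.now() };
      // the real service would now persist a location event to the
      // YaaS document storage, feeding the journeys and analytics
    }
  });
  res.sendStatus(204);
});

app.listen(3000);
```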


The map and dashboard


The 4 action point UIs – signup, bar, journey and party booth.

Signup


Bar


Journey


An extra paragraph on the party booth

The party booth will be an awesome action point of the expose system, and it’s a bit crazy, I agree. I have to think of a nice way of showing you the UIs, so give me a few days after my current trip to get that done. It shows how we can interact with visitors on a 1:1 basis with the help of YaaS: it will load the data that a visitor left at signup and create a personalized party experience. At the moment, we’ve specified roughly how the party booth will look, and a local artist, Andreas Kraeftner from Munich, is working on the physical fabrication. We use metal, glass and wood, all combined with electronics like Raspberry Pis, LEDs, loudspeakers, a disco ball and a fog machine. The booth will take pictures via a camera connected to the Raspberry Pi within the booth and in the end create an animated GIF that users can post on Twitter.
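For the curious, the photo-to-GIF step could be as simple as shelling out to the Pi’s camera tool and ImageMagick. A minimal sketch, assuming raspistill and convert are installed – paths, sizes and timings are made up:

```javascript
const { execSync } = require('child_process');

function shootAndAnimate(frames = 5) {
  for (let i = 0; i < frames; i++) {
    // one still per frame via the Pi camera
    execSync(`raspistill -t 500 -w 640 -h 480 -o /tmp/frame${i}.jpg`);
  }
  // stitch the stills into a looping animated GIF
  execSync('convert -delay 30 -loop 0 /tmp/frame*.jpg /tmp/party.gif');
  return '/tmp/party.gif'; // ready for the visitor to post on Twitter
}
```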

So yes, it’s crazy. And different to many showcases you’ve seen before. It’s okay to be different!


And about 500 Customer Accounts later…

The #HybrisSummit is over, and it was a blast. As our bullseye prototype is based on YaaS, we’ve collected a lot of data that I quickly want to share with you. So here we go: we’ve had >500 customer accounts created and >1200 product liftups in just two days! That means >500 demos given, often with many more people watching, of course!


If you look a bit deeper, you’ll see that the top product was the “Toucan”, our very colorful, sweet/medium-style candy. The data that we’ve collected from the profile questionnaires is also really interesting and will help us with upcoming orders: about 50% of all customers answered that they like sweet candy, followed by sour and salty.

Hope you like it, please reshare & comment as you like!

Enter the next Dimension…

…of Customer Engagement!

Oh no, marketing blabla!! … Pfff, so what. This prototype is cool enough to take it! Well it probably is… will be… okay, we still have to build it. But it will definitely be ready for the SAP Hybris Summit!…we guess. It all starts with this box:


Inside it is not our new prototype but an essential piece of machinery belonging to our new partners NavVis. It’s an M3 Mapping Trolley used for “easy and fast 3D mapping of entire buildings in photorealistic quality.”


It was fast and also very easy. That’s why Max was allowed to have a go…


We had our Labs Innovation Space scanned and will create some kind of virtual shopping world (TOP SECRET!) integrated with Hybris YaaS. Again, visit us at the Summit to see the real thing for the very first time! To give you a taste, here’s a link to a demo showcase provided by NavVis.

Bullseye – in-store targeting and analytics – an update

Yes, it’s been more than a month since the last update. The assumption was right – tons of work was ahead of us. And things still need to be optimized, beautified and fixed… but an end is in sight! We’ve made huge progress… we got products, we got a name, we overhauled the technology and worked with the designers to beautify the whole prototype. All while maintaining full YaaS compatibility and flexibility.

#1 – the name: bullseye. We think it’s great for a prototype about in-store targeting and analytics.

#2 – the products: we switched from perfume to candy. This morning, I got the confirmation for >180kg of candy delivered next week to the hybris labs premises here in Munich 🙂

#3 – why do we need all that candy? At this point, we also got full confirmation to make this prototype the central piece of art/technology at the hybris summit 2016. We’re running our recommendation and analytics system for two days at this event. If you can, please stop by!

In case you’re completely unsure what this is all about, here’s a brief summary – directly from the documentation that I wrote yesterday.

(screenshot: the summary from the documentation)

So let’s take a more technical view on the updates. Below is the current architecture diagram, also soon to be even more shiny. For now, all the technical goodness is on it. Let’s step through each part.

Bullseye :: technical architecture

platforms

bullseye is a YaaS-based in-store product recommendation and analytics system. The end-user-facing and visible components are the platforms (numbered 1-8 in the diagram), which can be programmed via commands in various ways. For product selection, for example, the platforms may receive color commands. The platforms also contain one or multiple sensors whose values can be requested. For power supply and communication, the platforms are connected to a base station via a standard Micro USB cable. As a typical IoT edge device such as a Raspberry Pi has only a limited number of USB ports and a limited power supply, standard USB 2.0 hubs sit between the base station’s USB ports and the platforms.

Recent updates here, not including bug fixes:

  • We’ve optimized the serial communication – previously, every command sent to the platforms (serial, byte-based communication) was answered with an event that repeated the data for confirmation. Now the platforms only respond with a JSON-formatted message when a command triggers an EEPROM update or a sensor reading.
  • We’ve implemented new commands, mainly for light effects. The platforms can now flash in RGB colors (random flashing simulates a “thinking” system) and support a few other color effects – see the serial sketch below.
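To illustrate, here is a rough node.js sketch of the base-station side of that serial protocol, using the serialport library. The opcodes, byte layout and port path are invented for illustration – the real protocol is custom:

```javascript
// Hypothetical sketch of the serial, byte-based protocol (opcodes made up).
const SerialPort = require('serialport');
const port = new SerialPort('/dev/ttyUSB0', { baudRate: 115200 });

// Fire-and-forget color command: per the optimization above, the platform
// stays silent – no confirmation event is sent back.
function setColor(r, g, b) {
  port.write(Buffer.from([0x01, r, g, b])); // 0x01 = hypothetical "color" opcode
}

// Sensor reading: one of the cases where the platform does answer,
// with a JSON-formatted line.
function readSensor() {
  port.write(Buffer.from([0x02])); // 0x02 = hypothetical "read LDR" opcode
}

port.on('data', (chunk) => {
  // e.g. {"type":"sensor","value":512}
  console.log('platform says:', chunk.toString());
});
```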

Next, the base station – that’s where the platforms connect to via USB cable:

The base station acts as a gateway between the platforms and the internet. It connects to the internet via the IoT standard protocol MQTT and to the individual platforms via a serial, byte-based communication protocol. The base station is subscribed to an MQTT topic for the base station itself and forwards commands sent to this topic to all connected platforms. At the same time, it is also subscribed to individual topics for each platform – this allows each connected platform to be addressed individually. The central communication element in this system is an MQTT broker. It is the essential piece that connects the base stations to the internet and allows the remaining bullseye system to send commands to the platforms and to receive events from them.
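A minimal sketch of that gateway behavior, assuming the MQTT.js client library – the topic layout is an assumption, and the serial writers are stubbed:

```javascript
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://broker.example.com'); // hypothetical broker

const BASE_ID = 'base-1';
// one serial writer per connected platform (stubbed here with console.log)
const platforms = {
  'platform-3': (buf) => console.log('serial -> platform-3:', buf.toString()),
  'platform-7': (buf) => console.log('serial -> platform-7:', buf.toString()),
};

client.on('connect', () => {
  client.subscribe(`bullseye/${BASE_ID}/commands`); // base-wide commands
  Object.keys(platforms).forEach((p) =>
    client.subscribe(`bullseye/${BASE_ID}/${p}/commands`)); // individual topics
});

client.on('message', (topic, payload) => {
  const parts = topic.split('/'); // bullseye/<base>[/<platform>]/commands
  if (parts.length === 3) {
    // base-wide command: fan out to every connected platform over serial
    Object.keys(platforms).forEach((p) => platforms[p](payload));
  } else if (platforms[parts[2]]) {
    platforms[parts[2]](payload); // addressed to a single platform
  }
});
```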

Recent changes:

  • We can now address the base station itself via a dedicated MQTT topic, and the base station forwards the command to all connected platforms. This minimizes the network load and is great in combination with the “flash” command: we use it to simulate a thinking system right before the results are presented on individually lit-up platforms.
  • MQTT reconnect behavior: we’ve fixed a major bug around the reconnect behavior. From time to time, our MQTT connection is closed due to network issues. We now resubscribe to all platform topics after a reconnect, so the system stays fully functional – see the sketch after this list.
  • Our base stations are now Raspberry Pis, and we use the excellent forever-service to run our node process in a “forever” fashion: even if the process is killed, forever restarts it immediately.
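The reconnect fix from the list above boils down to redoing all subscriptions whenever the connection comes back. A minimal sketch, again assuming MQTT.js, whose 'connect' event fires after every (re)connect – topic names are made up:

```javascript
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://broker.example.com'); // hypothetical broker

const topics = [
  'bullseye/base-1/commands',
  'bullseye/base-1/platform-3/commands',
];

client.on('connect', () => {
  // runs on the initial connect AND after every network-triggered reconnect,
  // so the base never stays connected-but-deaf
  topics.forEach((t) => client.subscribe(t));
});

client.on('close', () => console.log('connection lost, retrying...'));
```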

Let’s move on to the bullseye YaaS package:

The bullseye package is part of the YaaS Marketplace and can be subscribed to by a client project. The package contains the bullseye service, offering various UIs for the end user and retailer, and also the central matching service. The matching service is a completely tenant-aware service that uses the profile input from a customer questionnaire to score a selection of products. The result is a scored list of products that are mapped to platform IDs and then addressed with a color command over MQTT – a sketch of this last step follows below. This results in a physical selection of products based on the preceding product matching algorithm. The bullseye service also contains an internal analytics module that powers various analytics UIs. The bullseye builder module is used by a client project to configure the bullseye system. Typical configuration includes the mapping of products to platforms and the setup of a customer-facing questionnaire with scoring information for each correctly answered question.
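That last step – mapping scored products to platforms and lighting them up – could look roughly like this. A sketch only: the mapping format, topic and payload are assumptions.

```javascript
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://broker.example.com'); // hypothetical broker

// from the builder module configuration: productId -> platformId (made up)
const mapping = { candy_toucan: 'platform-3', candy_cola: 'platform-7' };

function highlight(scoredProducts) {
  scoredProducts.forEach(({ productId, score }) => {
    const platformId = mapping[productId];
    if (!platformId) return; // product not on this shelf
    client.publish(
      `bullseye/base-1/${platformId}/commands`,
      JSON.stringify({ cmd: 'color', r: 0, g: 255, b: 0, score })
    );
  });
}

highlight([{ productId: 'candy_toucan', score: 9 }]);
```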

Changes, well, tons. Let’s see:

  • The matching service is the central component, taking the profile info and matching it with products based on the questionnaire and scoring information. We’ve now implemented a proper blocking behavior – a user enters a session and operates the base/shelf alone for 30 seconds. If he chooses to keep using the shelf, the session is extended.
  • We’ve created 3 analytics screens that connect to real-time data via socket.io channels (see the sketch after this list). All analytics data is held in memory, which is OK for a prototype; it’s nicely contained in a single node.js library and could easily be persisted. Below are some of the analytics screens – more later, once the screens have been completely redesigned.
  • The questionnaire / form UI is already beautified – in the pics below, you see the result. It works excellently on desktop/tablet/phone, fully responsive. Form resubmission for flashing effects and a back button to play again complete the changes here.
  • A product info screen, connected to live product liftups, has been added.
  • A randomizer feature will perform a slideshow among the 3 analytics and product info screens. For events, we’re able to launch it and it just runs and shows all screens over time.
  • More… but this is getting too long…
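The socket.io mechanics behind the real-time analytics screens are straightforward. A sketch, with event names invented for illustration:

```javascript
const http = require('http').createServer();
const io = require('socket.io')(http);

const liftups = []; // all analytics kept in memory – fine for a prototype

// called whenever a platform reports a product liftup
function onLiftup(event) {
  liftups.push(event);
  io.emit('liftup', event); // every open analytics screen updates live
}

io.on('connection', (socket) => {
  socket.emit('history', liftups); // late joiners get the data so far
});

http.listen(8080);
```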

So – you probably want to see some pictures, right? See below. What’s next? While a lot has been done, we’re still working on finishing touches. Potentially we’ll save the product results to a customer’s cart – so when she opens the cart later at home or at the event, it is prefilled with the matching products. We also need to take care of all kinds of small UI-related issues, and we need to make sure the logistics for the hybris summit are taken care of. We’re creating an amazing construction, a pole-like art installation, together with our booth builders here in Munich. Stay tuned!

The state of the end-user questionnaire UI:


The state of the builder module:


And finally some of the early analytics screens:


In-Store Targeting and Analytics – on YaaS!

It’s finally time to write about a new project we’re working on. Hopefully this also helps to clear up a few open issues we’re still working on. So here’s some news about a project we’ll probably name “bullseye”. To some degree it is an extension of the wine shelf. But it’s super flexible in terms of configuration and products. And – boom – it’s almost 100% based on YaaS, the new hybris commerce APIs.

Architecture, rough… 

This architecture is rough and can change any moment, but it’s a good basis to describe what this is about. The idea itself – again – is about selecting products in the physical retail space, and also about providing feedback to the retailer about physical interactions with products. YaaS plays a big role, as we use the YaaS Builder Module system to edit the entire configuration of the system. We’ve also written our own YaaS service that provides the product matching logic in a completely tenant-aware fashion.

Bullseye :: technical architecture

Platforms and Bases = Smart Shelf

From a technical perspective, the hardware used is less impressive – it’s really not the focus this time. We’ve worked on a 3D-printable design that contains the electronics for the hardware parts of this prototype. Each of the platforms below (so far we have about 20 fully working platforms) contains a microcontroller for the logic, a large 24-pixel NeoPixel LED ring (output) and an LDR (light-dependent resistor, input). The platforms connect via Micro-USB to a base (power, serial data), which most likely will be a Raspberry Pi again. In between, we need standard USB 2.0 hubs, as a Raspberry Pi has only 4 USB ports and we would like to power as many as 20 or 30 platforms from one base. Check out some images below.


The firmware that runs on the platforms is able to receive a few commands over a custom serial protocol. Via this protocol, we can change the identity of a platform (stored in EEPROM), read the sensor value or issue a light effect command (e.g. turn all pixels on, turn them red). It’s a fairly low-level, basic communication protocol. The only business-level logic that so far still runs on the microcontrollers is the calculation of liftup times: we count the duration between the increase of light (product lifted) and the decrease of light (product put down). To not interfere with the NeoPixel (light) ring, we block the event calculation while a light effect is executing – the logic is sketched below.
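The liftup calculation itself is firmware on the microcontroller, but the logic is simple enough to sketch in JavaScript for readability. The threshold values are invented for illustration:

```javascript
let liftedAt = null;
let effectRunning = false; // light effects would distort the LDR reading

function onLdrReading(value, now) {
  if (effectRunning) return null;               // block during light effects
  if (liftedAt === null && value > 600) {
    liftedAt = now;                             // light increased: product lifted
  } else if (liftedAt !== null && value < 400) {
    const liftupTime = now - liftedAt;          // light dropped: product put down
    liftedAt = null;
    return liftupTime;                          // the duration we report as an event
  }
  return null;
}
```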

The bases, most likely Raspberry Pis, each have a unique ID. The platforms, again, have unique IDs. Via MQTT (node.js using an MQTT client library) we can issue commands to the bases and to the platforms directly.

MQTT Broker

An important architecture component that we can’t live without is the MQTT broker. Due to port restrictions and other technical issues, this part currently lives outside of the YaaS cloud. The bases connect to the broker and bridge the platforms over serial. The bases subscribe to MQTT topics that match the platform IDs. They also subscribe to a base-level topic, so we can send base-wide commands. If a platform disconnects from a base, we unsubscribe from the MQTT topic of that platform. This keeps the required communication bandwidth low.

YaaS Builder Module

The builder module that you get once you subscribe to our package in the YaaS Marketplace allows you to configure the physical mapping and the questionnaire that the end user finally gets to see. The products come from those you’ve configured via the YaaS product service. Below are a few honest screenshots, taken before we even started styling these screens (be kind!).

As a user, you’ll first have to choose a shelf, which is identified by the ID of the base. Next, you choose which product category you’re creating the recommendation system for. All products of the shelf need to adhere to a common set of attributes, hence the category. Third, you’ll assign the products of that shelf/category combination to platform IDs. Finally, the scoring configuration is specified – which questions, which answers, which score per correct answer. The scoring configuration is the key ingredient of the end-user questionnaire form; a sketch of what it might look like follows below. Once all four steps are completed, the retailer is given an end-user URL that can be turned into a shelf-specific QR code (or put onto an NFC tag, or onto a physical beacon, or shortened and printed, etc.).
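Pulling the four steps together, the resulting configuration might look something like this – all names and values are made up for illustration:

```javascript
const config = {
  shelf: 'base-1',                      // step 1: the base ID
  category: 'candy',                    // step 2: common attribute set
  mapping: {                            // step 3: product -> platform ID
    candy_toucan: 'platform-3',
    candy_cola: 'platform-7',
  },
  questions: [                          // step 4: questionnaire + scoring
    {
      text: 'Which taste do you prefer?',
      answers: [
        { text: 'sweet', scores: { candy_toucan: 5, candy_cola: 2 } },
        { text: 'sour', scores: { candy_cola: 5 } },
      ],
    },
  ],
};
```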


YaaS Matching Service

Our matching service is triggered by a special URL that goes through the YaaS API proxy. All requests and bandwidth are counted and can later be billed. The end-user experience begins with a rendering of the questionnaire. The user chooses his answers and sends the data off to the matching service. The matching service then pulls the scoring configuration, the products and the mapping to calculate the matches. Based on a relative threshold, we calculate which products, and therefore which physical platforms, are highlighted – a sketch follows below. Then MQTT messages are sent out to the bases/platforms to highlight the appropriate ones.
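A sketch of that match calculation, reusing the configuration shape from above – the 0.8 threshold and the data shapes are assumptions:

```javascript
function match(answers, config, threshold = 0.8) {
  const scores = {};
  answers.forEach(({ question, answer }) => {
    const chosen = config.questions[question].answers[answer];
    Object.keys(chosen.scores || {}).forEach((productId) => {
      scores[productId] = (scores[productId] || 0) + chosen.scores[productId];
    });
  });
  const best = Math.max(...Object.values(scores));
  return Object.keys(scores)
    .filter((p) => scores[p] >= best * threshold) // relative threshold
    .map((p) => config.mapping[p]);               // platform IDs to highlight
}

// e.g. match([{ question: 0, answer: 0 }], config) -> ['platform-3']
```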


Once a customer uses the system via a questionnaire, the shelf belongs to her for the next moments. This means we block access to the tenant/shelf combination for some time. During that time, the user is interacting in a personalized session with the shelf. Lifting up a product results in the display of detailed information directly on the customer’s tablet or smartphone. And of course, it fuels a few analytics displays that still need to be detailed.

What’s next? Tons of work ahead.

We’re working hard on the specs for the initial version of this prototype and on some sample products, categories and configuration that we’ll use for the hybris customer and partner days in Munich (early February 2016). But we’re also thinking of a few extra features that might make it into the prototype by then: for example, a stocking mode, in which the platforms light up one after another while the screen shows the product that needs to go on each of them. It helps both the labs members setting up a demo and the retail employee stocking a shelf. And we’re thinking of sending the recommended products via email – a customer could then continue shopping at home with a pre-filled cart.

Got ideas? Let us know. This is the time to provide input!

It seems Lars lost his brain – in Future

During Oktoberfest many people lose something – an umbrella, a shoe, control. But it seems Lars lost his brain: it lies on his desk with some LEDs under it.

“Lars, is this your brain?”

“No, this is not my brain, it is just some slimy gimmick that looks like one. I’m fine, don’t worry.”

“And why are there LEDs under it?”

“This whole thing is a prototype for a project. First of all, I connected a group of three LEDs and let them fade and pulse. The next step was to control them from my laptop. For that, I use the USB connection to the Arduino and send the CPU idle time of my laptop as numbers. The more my computer thinks, the faster the brain pulses.”
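The laptop side of that could look roughly like the sketch below, computing the CPU idle percentage with node’s os module and streaming it to the Arduino via the serialport library. The port path, interval and number format are assumptions:

```javascript
const os = require('os');
const SerialPort = require('serialport');
const port = new SerialPort('/dev/tty.usbmodem1411', { baudRate: 9600 });

let prev = os.cpus();
setInterval(() => {
  const cur = os.cpus();
  let idle = 0, total = 0;
  cur.forEach((cpu, i) => {
    Object.keys(cpu.times).forEach((k) => {
      const delta = cpu.times[k] - prev[i].times[k];
      total += delta;
      if (k === 'idle') idle += delta;
    });
  });
  prev = cur;
  port.write(Math.round((idle / total) * 100) + '\n'); // e.g. "87\n" to the Arduino
}, 1000);
```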

“Can you say something about the project?”

“Yes, but we can also time travel to it.”

“What? Do you think we can time travel?”

to be continued…

How do you fit a bedroom into a shopping cart?…

…or a kitchen? …or a garden? Answer: you don’t. Our next prototype is coming soon! Here’s a short preview:

It’s going to be a prototype for any scenario in which the customer is not able to put the items of interest into a shopping cart or basket – either because they’re too big, or too heavy, or because the items are only samples in a showroom. You could also imagine a B2B scenario, for example a cook of a large-scale kitchen collecting ingredients from a wholesale market. The idea behind “Infinite Cart” is that the customer does not have to make notes or take photos of potentially desired products. Instead, all the required information will be stored on a device we’re just building ourselves.


As you can see, it’s a wearable…will be…sometime…perhaps… In any case, there’ll be some flashing LEDs. Oh, and we’re resurrecting NFC. Apart from all of that, it’s going to be integrated with YaaS.

If you have any ideas or suggestions, please contact us!

Moto – "It’s so simple, even Nick can run it…”

Thanks Lars… Have you finished playing with Minecraft?…

So, what is Moto? Moto is the fourth prototype of our IoT series, and again we’re shifting the focus of what we want to demonstrate. When we built the Smart Wine Shelf, we concentrated on the customer experience. With Funky Retail we focused more on the analytics. Tiles was a step closer towards exploring the technological aspects around mobility. And now finally, with Moto, we’re diving even further into the IoT technology and the possibilities it offers to permanently reconfigure the prototype’s functions.


The ‘active’ physical components of a Moto are a distance sensor, a turning platform and an LED ring. Moto connects to an Android app via BLE and uses the device as a data hub. The components and their actions can be connected in any way that seems sensible, by means of a programming tool called ‘Node-RED’. And exactly this is the essence of Moto: ‘Node-RED’ allows users without special expertise in coding and programming (like Nick…) to configure an IoT-based system. The actual Moto and the actions taking place merely serve as an example. These actions are displayed in a web page UI through which they can also be triggered.

We’re deliberately not telling a specific business story around this prototype. Basically it’s a bit of plug & play for IoT.


Read the more techy posts on Moto here!

First Lego, now Minecraft. Yes, we love our bricks!


It is continuously becoming more difficult to sustain a touch of seriousness here… Imagine one of your colleagues just finishing playing with Lego at his desk when the next one starts “working” with Minecraft and shares the fruits on the big screen in the office. Questions do arise… Okay, let’s try to explain.

Lars is currently exploring how IoT and SAP HANA fit together with a very simple approach. He’s measuring the room temperature in the office. Don’t laugh! This is serious stuff! Yeah okay, it does sound a bit ridiculous actually… but here goes: Lars connected a temperature sensor to a BeagleBone Black which sends the data to the SAP HANA Cloud Platform. From there it goes to a Raspberry PI that has Minecraft running on it and is connected to a TV. So much for the basic architecture. Here’s the idea: Lars wrote a script with which the temperature data can be displayed in a Minecraft landscape. Why? Because this is what it looks like in SAPUI5:

(screenshot: the temperature data as a plain SAPUI5 chart)

Lars is planning to add more sensors. Suggestions, anyone? Maybe even with a commerce-related background…?

Making IoT visual. Using Node-RED for hybrislabs moto.

Our prototype ‘moto’ is just taking an interesting turn. It’s technically pretty robust, we’re finishing off some UI/product-choice things, and we asked ourselves: that’s it? While we’ve developed one story for each prototype so far, it seems we’re focusing on different, higher-level issues when it comes to IoT with moto: the issue of how things are wired up, how things can interact and how existing configurations can quickly be changed.

So instead of one story, we’ll have many stories for moto. It will stay interesting, from a technical perspective, too. But the real story is: using Node-RED extensions, we’re able to rewire the logic of moto very quickly. And: it’s a tool for business-users. No hardwired setup.


Just like most other IoT-focused prototypes that we have (Wine Shelf, Funky Retail, Tiles), Moto has a REST-based web API to control the light and motion, as well as a webhooks-based system to communicate with the outer world. But because moto is also built around MQTT (just like funky retail), it is easy to extend. With Node-RED and MQTT, we have a direct hook into the core messaging system of our IoT prototype. And we’re using the extensions we’re currently working on to make the commands and events easy to wire up. Take a look at a very simple example:


Here, a node triggers every 5 seconds. It first hits the ‘red, slow, counter-clockwise’ node, and the connected moto (here #2) begins to turn red and slowly rotate counter-clockwise. After a delay of 2 seconds, it turns green and moves clockwise, fast. Finally, after another 1-second delay, the motor turns off and the LEDs turn white. Then it starts over again.
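For the technically curious, the same flow expressed as plain node.js against the MQTT broker might look like this. The topic and payload format are assumptions – in reality, the Node-RED nodes hide those details:

```javascript
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://broker.example.com'); // hypothetical broker
const cmd = (c) => client.publish('moto/2/commands', JSON.stringify(c));

setInterval(() => {
  cmd({ color: 'red', motor: 'ccw', speed: 'slow' });                          // t = 0s
  setTimeout(() => cmd({ color: 'green', motor: 'cw', speed: 'fast' }), 2000); // t = 2s
  setTimeout(() => cmd({ color: 'white', motor: 'off' }), 3000);               // t = 3s
}, 5000);
```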

We’re just having our first successes with Node-RED, but it looks very promising as the brain of our prototypes. It might be a smart move, as others (e.g. non-technical people, the business guys, the ones with the smart stories) can rewire moto in whatever flavor they want.

Next up in tech are the input nodes, e.g. moto can send presence events that can start interactions.

Just to wrap up, here’s another example where we took the current temperature in Munich, converted it into a moto command and sent it to moto to display the temperature via light.


Let us know what you think!