An update on expose: now adding a party booth

Finally, an update on the latest developments around expose, our location/action tracking prototype that we're developing on top of YaaS. You might remember that we track the location of RFID labels via the location readers. Besides locating the labels, we have also developed an “action reader” subsystem that is used to engage with the user of an RFID label on a 1:1 basis. For the action readers, the user has to actively place their RFID label close to a small matchbox-sized antenna to be scanned. Below is the updated system architecture:

[Diagram: Expose Technical Architecture]

While the architecture/framework is the same for all action readers (they all send their scanned labels to a common backend API), the screen shown on each tablet is selected via reader-specific MQTT topics. Each action reader posts to the backend with its tenant/reader ID information, and the backend forwards the data to the appropriate screen, connected via Socket.IO.
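The post doesn't spell out the exact topic scheme, so here is a minimal sketch of the routing idea; the topic format and the function names (`buildTopic`, `routeScan`) are assumptions for illustration, not the real backend code:

```javascript
// Hypothetical topic scheme: one topic per tenant/reader pair.
function buildTopic(tenant, readerId) {
  return `expose/${tenant}/action-reader/${readerId}`;
}

// The backend could map each topic to the Socket.IO room of the
// screen that subscribed for that reader, and forward each scan there.
function routeScan(scan, emit) {
  const topic = buildTopic(scan.tenant, scan.readerId);
  emit(topic, { label: scan.label, scannedAt: scan.scannedAt });
  return topic;
}
```

In a real setup, `emit` would be something like `io.to(room).emit(...)` on the Socket.IO server side.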

Right now we have completed these action reader setups:

  • signup: a kiosk where new users with fresh RFID labels are onboarded or can change their data
  • bar: a kiosk where an employee or a barkeeper can log a drink they take out of the fridge
  • party: a party booth that throws a personalized party based on the data we know about the user

For this post, I wanted to pick out the party action reader system specifically. It consists of:

  • an action reader,
  • the tablet screen for the party booth that it is tied to, and
  • the party booth itself

The action reader system looks like this:

[Diagram: Expose Action Reader – Technical]

The real fun comes in when you look at the party booth. It’s a pretty nice system with a Raspberry Pi at its core.

[Diagram: Expose Action Station – Party Booth Technical]

Right now, the booth looks rough 🙂 But we’re in discussions with a local artist to build a proper booth enclosure around it. It’s already a lot of fun to use, believe me!


The software of this system runs on node.js, starts automatically on boot and has been quite stable so far. The sequence for using the booth is this:

  • a new user comes into the booth and holds their RFID label close to the action scanner
  • the tablet screen (here: our TV screen) shows a welcome message along with the color and music choice of the customer. This data was provided by the user during the onboarding/signup process
  • the party booth’s Raspberry Pi starts playing music according to the profile
  • the DotStar LEDs are colored according to the profile – in combination with the rotating disco ball, this creates a nice atmosphere in the booth
  • the fog machine turns on for a few seconds, so the bottom of the box fills with fog
  • while the user is in the booth and the music is playing, pictures are taken via the Raspberry Pi camera. These pics appear in real time on the tablet screen
  • once the party is over, all pics are aggregated into an animated GIF and again shared to the tablet screen
  • the user can now select one image and it will be shared to the ylabsparty Twitter account. Have a look – it’s already pretty cool!
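The sequence above might be orchestrated roughly like this in the booth’s node.js controller. This is a sketch only; all hardware-facing functions (`playMusic`, `setLeds`, `runFog`, `takePhotos`, `makeGif`) are hypothetical stubs standing in for the real GPIO, camera and GIF code:

```javascript
// Run one party for a scanned user's profile. The hardware object `hw`
// is injected so the real booth and tests can supply different backends.
async function runParty(profile, hw) {
  const steps = [];
  hw.playMusic(profile.music);        steps.push('music');   // profile's music choice
  hw.setLeds(profile.color);          steps.push('leds');    // profile's color choice
  await hw.runFog(5000);              steps.push('fog');     // fog for a few seconds
  const pics = await hw.takePhotos(); steps.push('photos');  // Pi camera shots
  const gif = await hw.makeGif(pics); steps.push('gif');     // aggregate into animated GIF
  // (sharing the selected picture to Twitter would follow here)
  return { steps, gif };
}
```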

All right – good for now, ready for the weekend. I hope to update you again soon; till then, follow us via the ylabsparty Twitter account!

Coding for Kids – our codeweek participation

For the last two days, Lars and I have been on a special mission. Powered by codeweek.eu and our internal hybris/SAP CSR program, we taught about 10 kids at a school south of Munich the basics of programming. We chose the open-source and kid-friendly Kano OS for this, and it was an excellent choice.


Our workshop began with us explaining the Raspberry Pi and how to put it together. The kids worked in teams of two and assembled it all by themselves. Most of them already knew what the different ports are used for and needed very little help.

We worked completely offline on the first day, which was a good choice in retrospect. There are ways to play together on Kano OS, which is nice, but it can also be a source of distraction. So on day one we got them started with exploring the command line (game: Snake) and visual programming via the game Pong. A few kids also explored the Make Art program, which lets them code a picture – a more creative way to start coding.

[Photo: a simple Minecraft script]

At the end of day two, some kids had started to code. You can see a simple Minecraft script above. We discussed when coding is preferable to crafting and when it’s the other way around.


To calm the classroom down at the end of a group Minecraft session on day two, the teacher put this Raspberry Pi in the middle and everybody had to visualize being inside the Raspberry Pi. After a few deep breaths, we were all super relaxed again. We then started to collect feedback – on Lars’ back, because that’s more fun!


Many thanks to all involved – the school, the teachers, and SAP/hybris for letting Lars and me do this! I am sure there are going to be some Raspberry Pis under the Christmas tree this year.

Max, what are you soldering there?

“Ouch! (his fingers, obviously… hihihi…) I’m just building a prototype board to connect the TV screens in the labs space to a logical control system which we can then use for IoT purposes. And to connect this we need a special adapter – that’s what I’m soldering right now.”


“How does that work? Not the soldering, the system.”

“It’s basically just a serial adapter which uses an Ethernet connector instead of a normal RS232 connector. And because we need something like this for a Raspberry Pi and we don’t want to stack three adapters in a row, we’re building it on a circuit board ourselves.”

“Tell us a bit more about why you’re doing this.”

“Our existing video solution in the labs space isn’t really satisfying in terms of the functionality it provides. We want a video system that lets us control at any time which video is playing, so that we can use the system to display events generated by other IoT prototypes, e.g. Funky Retail. If a customer’s presence is detected, Funky Retail would normally just light up and a video would play. And this is all static, right? Meaning, the Raspberry Pi that is built into the Funky Retail system would then play a video. What we could do instead is use this ‘customer presence’ event to trigger some action on any of the screens in the labs space. It doesn’t need to be a simple video, it could be anything. Things like: you walk over from one screen to the next and the video follows you, at the right seek position in the video itself.”

“How exactly did you solve the problem you had connecting to the TVs?”

“That’s what I’m doing right now. The thing is, each TV needs two connections. One is HDMI, which is no problem, and the other is a serial connection which we can use to control the screens we already have in the labs space. There is another technology for controlling TVs, called HDMI CEC (Consumer Electronics Control), but it only exists in consumer TV screens, and we have professional screens. That’s why we need to use the professional controlling option, which is a serial connection. So we need two cables to each screen. And what I’m soldering right now is the adapter for a Raspberry Pi to control each screen, and this is done via an Ethernet cable. What happens is, each Raspberry Pi will have an HDMI connection to a screen and a serial connection. Then we’ll have one controlling Raspberry Pi for each screen.”
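To give an idea of what such serial control looks like, here is a sketch that builds a power-on frame in the style of Samsung’s MDC protocol for professional displays (header byte, command, display ID, data length, data, checksum). This is an illustration only – the post doesn’t say which screens or protocol the lab actually uses:

```javascript
// Build a command frame: 0xAA header, then command, display id,
// data length, data bytes, and a checksum over everything but the header.
function buildFrame(command, displayId, data) {
  const body = [command, displayId, data.length, ...data];
  const checksum = body.reduce((sum, b) => sum + b, 0) & 0xff;
  return Buffer.from([0xaa, ...body, checksum]);
}

// Example: power on display 0 (MDC-style power command 0x11, data 0x01).
const powerOn = buildFrame(0x11, 0x00, [0x01]);
```

On the Pi, a frame like this would be written to the serial port that the soldered adapter exposes.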

“And at the end we can control the screens from wherever we want through the internet?”

“Yes, that’s the point. The Raspberry Pis will then connect to a central broker system which is basically just a server on AWS. … Damn, too short! (I think I might be distracting Max here a bit…)
So, I’m just solving the hardware problems right now. The stories and applications we can build on top of this are much more interesting than the hardware. This will be included in other prototypes as an output source. We have a lot of input sources in our IoT prototypes, e.g. pick-up events, presence events, etc., but most of our output events are flashing lights and web UIs. If what I’m building here works, we could utilise any TV screen, totally independent of our other prototypes, as an output. That’s the idea, and we’ll have the applications pretty soon. The easiest one is just to replace the static setup for Funky Retail, which already plays a video when you pick something up, and decouple the playback of the video from the controlling Raspberry Pi of Funky Retail. This totally makes sense.”
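The decoupling Max describes could be sketched as a tiny dispatcher on the broker that maps prototype input events to commands for an arbitrary screen. The event and command names here are invented for illustration:

```javascript
// Hypothetical routing table: which screen reacts to which event, and how.
const rules = new Map([
  ['funky-retail/pickup',   { screen: 'screen-2', action: 'play', video: 'pickup.mp4' }],
  ['funky-retail/presence', { screen: 'screen-1', action: 'play', video: 'welcome.mp4' }],
]);

// Look up the event and hand the command to a transport function
// (in the real system: publish to the controlling Pi of that screen).
function dispatch(eventName, send) {
  const cmd = rules.get(eventName);
  if (cmd) send(cmd.screen, cmd);
  return cmd;
}
```

Because the table lives on the broker, swapping which screen plays which video needs no change on the prototypes themselves.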

Thanks Max, hopefully we can celebrate the success soon! And be careful with that soldering iron…

First Lego, now Minecraft. Yes, we love our bricks!


It is continuously becoming more difficult to sustain a touch of seriousness here… Imagine one of your colleagues just finishing playing with Lego at his desk when the next one starts “working” with Minecraft and shares the fruits on the big screen in the office. Questions do arise… Okay, let’s try to explain.

Lars is currently exploring how IoT and SAP HANA fit together, with a very simple approach: he’s measuring the room temperature in the office. Don’t laugh! This is serious stuff! Yeah okay, it does sound a bit ridiculous actually… but here goes: Lars connected a temperature sensor to a BeagleBone Black, which sends the data to the SAP HANA Cloud Platform. From there it goes to a Raspberry Pi that has Minecraft running on it and is connected to a TV. So much for the basic architecture. Here’s the idea: Lars wrote a script with which the temperature data can be displayed in a Minecraft landscape. Why? Because this is what it looks like in SAPUI5:

[Screenshot: the temperature data in SAPUI5]
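The core of such a visualization is just a mapping from a temperature reading to the height of a block column in the Minecraft world. A minimal sketch of that idea – the temperature range and maximum height are made up for illustration, and Lars’ actual script may work differently:

```javascript
// Map a temperature reading (°C) linearly onto a block-column height,
// clamping readings outside the assumed comfortable office range.
function temperatureToHeight(tempC, minC = 10, maxC = 35, maxHeight = 50) {
  const clamped = Math.min(maxC, Math.max(minC, tempC));
  return Math.round(((clamped - minC) / (maxC - minC)) * maxHeight);
}
```

The script would then place that many blocks at the current timestamp’s position, so the landscape becomes a walkable temperature chart.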

Lars is planning to add more sensors. Suggestions, anyone? Maybe even with a commerce-related background…?