Tuesday, 24 September 2019 08:25

Using Edge AI for Smart Mirror v2

Originally published at https://cxlabs.sap.com/2019/09/25/using-edge-ai-for-smart-mirror-v2/

The original smart mirror that our team created is now a few years old, and we think the time is right and the technology mature enough to work on an updated version that uses Edge AI. Using a Raspberry Pi as an edge computing device in combination with Google’s Coral USB Accelerator, we’re now able to slightly twist this showcase and make the product training phase part of the demo. Here’s what we’re trying to achieve with the smart mirror v2:
  • From a technology point of view, it will be a showcase demonstrating the power of Edge Computing and AI – Edge AI. We intend to use TensorFlow Lite running on the Raspberry Pi in combination with the Coral USB Accelerator to speed up the processing of image data.
  • The core idea/use case stays roughly the same: a customer steps in front of the mirror and the mirror recognises a product it has been trained to detect. Based on the product, recommendations are offered to the customer. These recommendations come from a marketing backend, in our case the C/4HANA Marketing Cloud (more on that below).
  • We add the elements of the Teachable Machine (see also our podcast) to allow a shop manager to quickly train new products to be detected; a rough sketch of this idea follows this list. This training phase now becomes part of the demonstration and therefore of the experience for visitors of our showroom in Munich.
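
To make the teachable-machine idea more tangible, here is a minimal, hypothetical sketch of one way it could work: a headless classification model compiled for the Edge TPU acts as an embedding extractor, the shop manager’s training shots are stored per product, and new frames are classified by nearest-neighbour lookup. The model file and helper names are placeholders, not our actual showcase code.

```python
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Headless model (a classifier without its final layer) compiled for the
# Edge TPU; the file name is a placeholder.
interpreter = tflite.Interpreter(
    model_path="mobilenet_v2_embedding_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def embed(jpeg_path):
    """Run one image through the network and return its embedding vector."""
    _, height, width, _ = inp["shape"]
    image = Image.open(jpeg_path).convert("RGB").resize((width, height))
    interpreter.set_tensor(inp["index"],
                           np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0].astype(np.float32)

# "Training": the shop manager shows each product to the camera a few times.
catalog = {}  # product name -> list of embeddings

def train(product, jpeg_path):
    catalog.setdefault(product, []).append(embed(jpeg_path))

# Classification: the product with the closest stored embedding wins
# (cosine similarity against every training shot).
def classify(jpeg_path):
    query = embed(jpeg_path)
    def similarity(vector):
        return np.dot(query, vector) / (
            np.linalg.norm(query) * np.linalg.norm(vector) + 1e-9)
    return max(catalog, key=lambda p: max(similarity(v) for v in catalog[p]))
```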

As you can read, we’re changing the original idea to some degree and turning it into a slightly more techy and explanatory showcase. The new showcase is not done yet; at this point we’re giving you some insights into what we’ve achieved so far and which problems we hope to tackle next. Both Lars and I will be happy to get feedback via Twitter!

Below is a first rough animation of how the training and classification phases look right now:

Technical Flow and Components

Let’s quickly step through the process of the demo, explaining the key components along the way:

  • The edge computing device is a Raspberry Pi 4 with a Coral USB Accelerator to speed up the classification processing.
  • A portrait HDMI screen is connected to the Raspberry Pi, as is a USB webcam or the Pi Camera.
  • Raspbian Buster needs to be configured for portrait mode and runs a fullscreen web browser (kiosk mode) to present a web UI to the user. The top part is a live stream captured via the camera above the screen, which gives the user a mirror-like experience and live feedback to position herself correctly for the training/classification phases.
  • The JavaScript running as part of the fullscreen web UI constantly snaps pictures via the getUserMedia API and sends the compressed JPEG data to a local API endpoint via HTTP POST.
  • The web server accepts the pictures and uses them for training/classification with TensorFlow in combination with the Edge TPU via the Coral USB Accelerator; a minimal sketch of such an endpoint follows this list.
  • Based on the detection results, we can request recommendations from the C/4HANA Marketing Cloud and cache the results locally; a second sketch below illustrates this step.
  • The product recommendations are presented to the customer via the web UI.
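
To make the server side more tangible, here is a minimal sketch of what the local classification endpoint could look like. It assumes a Flask server, a product classification model compiled for the Edge TPU, and the tflite_runtime interpreter with the libedgetpu delegate; the route name and model file are placeholders, not our actual showcase code.

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
import tflite_runtime.interpreter as tflite

# Load a classification model compiled for the Edge TPU and attach the
# Coral delegate so inference runs on the USB Accelerator.
interpreter = tflite.Interpreter(
    model_path="products_edgetpu.tflite",  # placeholder model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

app = Flask(__name__)

@app.route("/classify", methods=["POST"])  # the browser POSTs JPEG bytes here
def classify():
    # Decode the JPEG sent by the web UI and scale it to the model's input size.
    image = Image.open(io.BytesIO(request.data)).convert("RGB")
    _, height, width, _ = inp["shape"]
    image = image.resize((width, height))
    interpreter.set_tensor(inp["index"],
                           np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    best = int(scores.argmax())
    return jsonify({"class_id": best, "score": float(scores[best])})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```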
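
The recommendation step could then be as simple as a cached HTTP lookup keyed by the detected product. The endpoint URL and response format below are entirely made up for illustration; the real integration goes through the C/4HANA Marketing Cloud APIs.

```python
import time

import requests

# Hypothetical recommendation lookup with a simple in-memory cache.
RECO_URL = "https://marketing.example.com/recommendations"  # placeholder URL
CACHE_TTL = 300  # seconds before a cached entry is considered stale
_cache = {}  # product_id -> (timestamp, recommendations)

def get_recommendations(product_id):
    """Return cached recommendations for a product, or fetch fresh ones."""
    hit = _cache.get(product_id)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return hit[1]
    resp = requests.get(RECO_URL, params={"product": product_id}, timeout=5)
    resp.raise_for_status()
    recommendations = resp.json()
    _cache[product_id] = (time.time(), recommendations)
    return recommendations
```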

Next up…

As you can see in the animation above, our little web app still needs a lot of polishing. Also, we haven’t implemented a reset functionality yet. These features will be implemented over the next few days, and we also hope to start some internal discussions about the UI and about replacing the current showcase in our labs.

