Cognitive B2I Advertising with IBM Watson, Bluemix, and IBM Cloud Object Storage

 

If you’ve been around enough conversations in our industry about things like machine learning, AI, cognitive, and analytics, inevitably someone will bring up the scene from the movie “Minority Report” in which Tom Cruise walks quickly through a retail area while cameras scan his face and show him advertisements personalized for him (even calling him out by name!).  It’s just a matter of time until that’s a reality, we always say, right?

Well, thanks to the combination of IBM Watson, Bluemix, Cloud Object Storage (formerly Cleversafe), and Spectrum Storage (and a little creative coding from our BlueChasm dev team), we can honestly say that vision of personalized advertising has become a reality, in the form of a platform our team built that we’re calling “Visual Communications.”

Just like in the movie, our Visual Communications platform enables any enterprise to interact directly with INDIVIDUALS in a personalized manner by showing them an advertisement targeted at their demographic group (age, gender, etc.) based on what the screen-mounted camera detects from “looking” at their face.

The B2I (Business-to-Individual) platform uses a unique combination of cognitive technologies from the IBM Cloud tech stack.  What’s intriguing about this new B2I model (a term used recently by IBM CEO Ginni Rometty) is that it’s truly a step beyond B2B and even B2C, in its ability to use cognitive capabilities to engage directly with and cater to specific individuals who might be customers or other important ecosystem partners.

This level of “on the spot” personalized engagement, whether targeted advertisements or other focused interactions, will enable businesses to serve their customers in genuinely new and innovative ways, especially when combined with previously gathered business data about that customer or individual.  We’ve included some possible use case suggestions in the video, but there is an almost limitless number of possibilities that build on top of our base open platform.


So how did we build this platform?

At a high-level, we used the following building blocks as part of our tech stack:

  • IBM Watson Visual Recognition:  Watson Ecosystem API that enabled some of our visual recognition capabilities in our platform
  • IBM Bluemix:  The master cloud development platform in the IBM public cloud (within which Watson is offered as a service)
  • IBM Cloud Object Storage (Cleversafe):  The object storage service our platform uses to serve up the advertisements, in the form of images, via its S3-compatible API (located on-premises in our lab)
  • IBM Spectrum Storage/Storwize:  Part of our IBM Cleversafe Object Storage implementation utilizes IBM Storage Systems under the covers, specifically the extremely versatile IBM Storwize line (located on-premises in our lab)
  • Python:  Our programming language of choice in this specific use case
  • BlueChasm:  Our awesome BlueChasm dev team (who actually dreamed up this use case, pulled everything together, and built the platform)
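To make the flow above a little more concrete, here is a minimal Python sketch of the decision step in the middle of that stack: taking a face-detection result in the shape returned by the Watson Visual Recognition v3 `detect_faces` endpoint and mapping it to the object-storage key of an ad image. The segment boundaries, key-naming scheme, and sample response are our own illustrative assumptions, not the actual BlueChasm implementation.

```python
import json

# A sample response in the shape returned by the Watson Visual Recognition
# v3 `detect_faces` endpoint (trimmed to the fields used here).
SAMPLE_RESPONSE = json.dumps({
    "images": [{
        "faces": [{
            "age": {"min": 18, "max": 24, "score": 0.93},
            "gender": {"gender": "MALE", "score": 0.99},
        }]
    }]
})

def select_ad_key(face):
    """Map one detected face to an ad image key in object storage.

    The segment boundaries and key naming are illustrative; a real
    deployment would drive them from campaign data.
    """
    age = face.get("age", {})
    gender = face.get("gender", {}).get("gender", "ANY")
    if age.get("max") is not None and age["max"] < 25:
        segment = "youth"
    elif age.get("min") is not None and age["min"] >= 55:
        segment = "senior"
    else:
        segment = "adult"
    return "ads/{}-{}.jpg".format(segment, gender.lower())

def ad_keys_for_response(raw_json):
    """Return one ad image key per face found in a detect_faces response."""
    images = json.loads(raw_json).get("images", [])
    faces = images[0].get("faces", []) if images else []
    return [select_ad_key(face) for face in faces]

print(ad_keys_for_response(SAMPLE_RESPONSE))  # → ['ads/youth-male.jpg']
```

In the full pipeline, the returned key would then be fetched from IBM Cloud Object Storage through its S3-compatible API (for example, with an S3 client pointed at the on-premises endpoint) and the image rendered on the screen the shopper is facing.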

 

Thank you for reading this blog entry; we hope you found it interesting!

Stay tuned for more things that we’re working on, and feel free to reach out to us anytime via our website or Twitter (@MarkIIISystems), if you’d like to talk more about how we built this platform or catch up on the latest on what else we have going on!

 
