Building a connected object is usually seen as a hardware challenge. But for the object to be useful in real conditions, the software also has to be extremely well designed. Quite often, the interface that lets the user interact with the object is limited by the very nature of the object. One way to cope with this is to analyze usage data in order to predict the user's next move.
Interface limitations and simple workarounds
Connected objects’ interfaces are not as rich as smartphones’. Without touch screens, the objects have to be dead simple to use for people to adopt them, so it’s best to reduce the required number of interactions as much as possible. Imagine, for instance, that your spouse or children had to input their name every time they stepped on the Withings scale or before using the Kolibree toothbrush. It would just be too inconvenient for anyone to use. The object’s software has to figure out automatically who’s using it. In the case of the scale, this is quite simple to do by looking at everyone’s previous measurements.
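To make the idea concrete, here is a minimal sketch of that kind of identification (this is not Withings' actual algorithm, and the names and weights are made up): compare the new reading to each household member's most recent known weight and pick the closest match, unless nothing is close enough.

```python
# Hypothetical stored history: each user's most recent weight in kg.
recent_weights_kg = {
    "Alice": 61.2,
    "Bob": 83.5,
    "Chloe": 24.8,
}

def identify_user(measurement_kg, history=recent_weights_kg, tolerance_kg=3.0):
    """Return the user whose last weight is closest to the new measurement,
    or None if no one is within the tolerance (e.g. a guest on the scale)."""
    best_user, best_diff = None, float("inf")
    for user, weight in history.items():
        diff = abs(measurement_kg - weight)
        if diff < best_diff:
            best_user, best_diff = user, diff
    return best_user if best_diff <= tolerance_kg else None

print(identify_user(84.1))  # → Bob
print(identify_user(50.0))  # → None: no close match
```

A real implementation would also track weight trends over time, but the principle is the same: past data makes the extra "who are you?" interaction unnecessary.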
Vehicles and wearables
The key to reducing interactions with connected objects is to predict what you, the user, are up to. For objects that you use when going places, such as vehicles and wearables, this means predicting your next destination.
Consider the Bike Finder app for Google Glass, which lets you find public bikes in the cities it supports. You launch the app with your voice (“ok glass, find a bike station”), it guides you to the nearest station where a bike is available using Glass’s built-in navigation, and off you go. But what if it could automatically predict whether you’re going home, to work, to the market, to a friend’s place, etc., all based on your previous trips? The navigation could continue to your final destination, without any extra input from you.
Ford did something like this in a car prototype that predicts its own destination. In fact, you probably have a similar feature on your smartphone, one that tells you the time to get to your next destination without your having to input anything (see screenshot below).
The beautiful thing about the Ford prototype is that it is based on the Google Prediction API, a service that abstracts away the complexity of creating predictive models from data. You can use it yourself in your own app, and you can test it out for free.
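For a feel of what such a prediction boils down to, here is a deliberately simple sketch (not Ford's or Google's actual model, and the trip history is made up): given the current context, return the destination you most often drove to in that context. A service like the Prediction API trains a more sophisticated model, but it consumes a similar table of labeled examples.

```python
from collections import Counter

# Hypothetical trip history: (day_type, time_slot, destination).
past_trips = [
    ("weekday", "morning", "work"),
    ("weekday", "morning", "work"),
    ("weekday", "evening", "home"),
    ("weekday", "evening", "gym"),
    ("weekday", "evening", "home"),
    ("weekend", "morning", "market"),
]

def predict_destination(day_type, time_slot, trips=past_trips):
    """Return the most frequent destination for this context,
    or None if the context has never been seen before."""
    counts = Counter(dest for d, t, dest in trips
                     if d == day_type and t == time_slot)
    return counts.most_common(1)[0][0] if counts else None

print(predict_destination("weekday", "evening"))  # → home
```

The point of a prediction service is that you only provide the table of past examples; the feature engineering and model selection happen on the other side of the API.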
Big Data and Connected Objects
Last month I came across an article on Disney’s MagicBand that John Foreman, Chief Data Scientist at Mailchimp, tested out. The MagicBand is a bracelet that Disney sends you via mail prior to your visit to the park. The idea is that you wear it when you get there and it lets you access your hotel room, pay at restaurants, etc. As you’d expect, the band also does a whole lot of tracking. Some of it is used to personalize the park goer’s experience, as Foreman relates in his anecdote of Mickey talking to his kids about Jack Sparrow rather than Buzz Lightyear.
But the business people at Disney also analyze tracking data to get insights into what people do in their parks, and they use these insights to generate more revenue. They can use the data to predict which attraction or which restaurant you’ll go to next, then influence your choices with recommendations or special offers, and thus spread out the flows of people in their parks. Ideally, this would improve visitors’ experience by removing the bottlenecks that cause wait times and frustration; as a consequence, people would spend more time in the parks and bring more business to the restaurants.
[Your connected object here]
The idea of predicting a user’s next move is something that Bret Victor, a designer behind the first iPad, visualized in 2006 (see the section on “Inferring context from history” in his essay Magic Ink). Prediction APIs are making the technology for this accessible, and connected objects are a domain where such predictions are extremely useful, so I can’t wait to see whether people will take advantage of this idea at the Connected Conference competition! Also, you should check out “The brain behind the hardware: why context matters”, a panel with Ami Ben David (EverythingMe), Filip Maertens (Argus Labs) and Romain Dillet (TechCrunch) taking place at the conference on June 19.
Louis is the author of Bootstrapping Machine Learning, the first guide to smarter apps using Prediction APIs. He is also a data consultant who helps companies exploit the value of their data.