Intelligent Algorithms, I Presume
April 19, 2013 | Citizen Inc.
At Interaction 13 in Toronto, Citizen’s Nicki Vance gave a talk on the algorithms of smartphones, likening them to Sherlock Holmes’ famous sidekick Watson. Read on to find out why even the smartest apps have stayed so “elementary.”
In January, Citizen’s Nicki Vance attended Interaction 13 in Toronto, the 10th annual interaction design conference, to give a presentation on the design of smart apps. According to her, apps have a long way to go before they can truly anticipate our needs or intelligently synthesize all the data our devices are capable of capturing. But designers can take these limitations into account and shape the way apps interact with users so the experience feels intuitive. For that reason, Nicki compared smart apps to Dr. Watson, the famed sidekick in Sir Arthur Conan Doyle’s Sherlock Holmes mysteries, who is always serving up valuable observations for Holmes’ analytical mind to process. That is, Watson can’t solve the whodunnit himself, but he can help Holmes crack a case through supportive insights. Below, you can watch her presentation and, for additional context, read a brief Q&A with Nicki to hear more of her thoughts on how smart apps can be more intuitive.
Your presentation talked about the thinking that goes into designing smart apps. What informed your thinking around the subject?
We’ve had a couple of projects at Citizen working with a company that wanted to get involved in data visualization from mobile devices. Your phone has a lot of sensors, and when you cross-reference the data that is siloed in each of your apps with those sensor readings, along with some general data about weather or big events, you potentially have a robust picture of what’s going on in a person’s daily life. We were lucky to be able to begin with some true concepting to answer the questions, “What would your phone tell you about yourself if all of this data could be interpreted intelligently?” and “What would you use that information for?”
This is great because we believe first and foremost in designing experiences based on people, not on what types of data are available, which limits creativity and innovation. Of course, we need to be practical and, in all projects, there’s a time to address the requirements and limits of what we can design. At that stage, we had existing data sources from another product that we needed to integrate into the new designs, but no algorithms for interpreting the data. I realized that when we surveyed the landscape of intelligent applications, the complex ones were usually long-term projects that had been iterated on in the wild. The developers working on these projects had the opportunity to monitor live data as they continued to tweak their algorithms. They didn’t seem so smart when they started, but, over time, they became more accurate and effective at interpreting their data into human-digestible meaning. In our role as a design firm, we could only help with the first stage, so we wrote algorithms that would produce complex insights from the data we knew we would have. These served as a template that could be extended later.
Good smart apps have a sense of intuition, which you compare to Watson of Sherlock Holmes fame. How, as a designer, do you take an intuitive experience into account?
Intuition is in the details — in subconscious-level observations. In our algorithms, we tried to look for simple observations that a phone could make because it has perfect memory, but that might be just beyond what a person would be consciously aware of, to give users the feeling that the app was intuiting insights from their minute behaviors. For example, we could cross-reference visits to the gym with your mood as interpreted from your text-based social interactions, and check for a pattern of less exercise when you are bummed out. Suggesting exercise at these times could create an opportunity to bond with the user if it confirms a feeling they have been having in the back of their minds.
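As a rough sketch of the kind of cross-referencing Nicki describes, here is how an app might compare gym attendance on low-mood days against other days and surface a gentle suggestion. The data fields, threshold values, and function names are invented for illustration; real mood scores would come from something like sentiment analysis of text interactions, and gym visits from location data.

```python
from datetime import date

# Hypothetical per-day records: a gym-visit flag (e.g. from location data)
# and a mood score in [-1, 1] (e.g. from sentiment analysis of texts).
days = [
    {"day": date(2013, 3, 1), "gym_visit": True,  "mood": 0.6},
    {"day": date(2013, 3, 2), "gym_visit": True,  "mood": 0.4},
    {"day": date(2013, 3, 3), "gym_visit": False, "mood": -0.5},
    {"day": date(2013, 3, 4), "gym_visit": False, "mood": -0.7},
    {"day": date(2013, 3, 5), "gym_visit": True,  "mood": 0.2},
    {"day": date(2013, 3, 6), "gym_visit": False, "mood": -0.3},
]

def exercise_mood_pattern(records, mood_threshold=0.0):
    """Compare the gym-attendance rate on low-mood days vs. other days."""
    low = [r for r in records if r["mood"] < mood_threshold]
    other = [r for r in records if r["mood"] >= mood_threshold]

    def rate(rs):
        return sum(r["gym_visit"] for r in rs) / len(rs) if rs else 0.0

    return {"low_mood_gym_rate": rate(low), "other_gym_rate": rate(other)}

def suggestion(records):
    """Return a nudge only when the exercise drop on low-mood days is clear."""
    stats = exercise_mood_pattern(records)
    if stats["other_gym_rate"] - stats["low_mood_gym_rate"] > 0.25:
        return "You tend to skip the gym when you're feeling down — a workout might help."
    return None
```

The point of the `suggestion` gate is the bonding moment described above: the app stays quiet unless the pattern is pronounced enough that the nudge is likely to confirm something the user has half-noticed themselves.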
You describe a good smart function behaving “like a professional personal assistant, smoothing the way.” What helps lend this ‘helpful’ quality to indexical systems? How can we keep designing humanity into our apps?
Being helpful can be touch-and-go. I spent a couple of years working at the reference desk in a library. Our goal was to be helpful in answering whatever question someone brought us. We did our best to make sure all of the reference information we needed was on hand. Knowing what information would be needed came from experience and assumptions about the demographics of our users. The reference desk served an art and architecture university, so we knew what subject matter our students needed, and knowing that they were students also gave us clues about the context in which they were seeking this information.
As I built my experience around what students might need, my interactions with them transitioned from open-ended questions to skipping ahead to what I knew they were looking for. I also had to learn how to say I had run into a wall and exhausted all of my resources, which is something users need to understand, too. To sum it up, everyone has a learning period, even apps, and it’s best to be honest that you are still learning. Learning to be helpful requires gauging interactions, measuring success, and being receptive to feedback.