Dustin Curtis wrote a fantastic article discussing the shortcomings of voice interfaces. I’d like to discuss a few further points in relation to this problem.
Go read his thoughts first, then come back. I’ll wait.
Dustin outlines that a lack of contextual awareness makes voice interfaces less than simple to use. I’d argue that this contextual awareness goes beyond voice interfaces and software, and affects the full product and experience design. At the end of Dustin’s article, he tells people to imagine what could be done; here are a few ideas I have about contextual awareness as it relates to our devices today.
Why do I have to tell my phone when I’m holding it with one hand?
The Freaking Huge iPhone 6 Plus and the Slightly Less Huge iPhone 6 bring to light some important contextual issues related to human factors. For people with smaller hands (often women), using the phone with one hand becomes difficult. In fact, Apple has introduced a strange accessibility mode, Reachability, that slides the top of the screen down when you lightly double-tap the home button.
That’s nice of them to help out, but why do I have to tell my phone how to be accessible every time I use it? Personalized accessibility should come standard, and it shouldn’t be that difficult a task. How cool would it be if I could tell my phone how far I can reach, and it remembered? It could adapt my interface to work better with my hand. Or maybe track where my thumb is and move the icons nearer to it, magnet-style. If Amazon’s phone can track my eyes, that seems relatively doable.
This kind of context is the low-hanging fruit: the features that continuously provide a positive return in user experience. This is the evolution of “user account preferences”.
My life has modes; why doesn’t my computer?
If you are like me, you have different modes of being. While I’m at work, I’m in work mode. Not everyone treats this issue in the same way, but perhaps you’ve experienced this before. This is especially important for people who use their computers in multiple contexts. I’d like to have a “mode” switcher on my computer. I should be able to flip a switch and open an environment on my computer that helps me focus, relax, or accomplish a specific set of tasks. For example, to record a screencast, I need my desktop cleared, a specific wallpaper set, a particular resolution on my screen, Do Not Disturb turned on, and an array of programs opened.
This should be a contextual setup on my computer, but it’s not.
Maybe you have kids that you want to allow to use your computer, but you don’t want to set up full user accounts for them. Why can’t you easily set access control and flip a switch to change contexts?
It’s not hard to make this happen; on a few occasions, I have set up scripts that switch contexts with a single command. But my operating system doesn’t do this on its own, and my contexts shift over time, so maintaining those scripts myself is unrealistic.
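To make the idea concrete, here is a minimal sketch of the kind of “screencast mode” script I mean, assuming macOS. The `defaults` key, wallpaper path, and app name are illustrative, and the script defaults to printing what it would do (set `APPLY=1` to actually run the commands); toggling Do Not Disturb is left out because it varies by OS version.

```shell
#!/bin/sh
# Sketch of a "screencast mode" switcher for macOS.
# By default this only prints each step; set APPLY=1 to execute.

run() {
  if [ "${APPLY:-0}" = "1" ]; then
    "$@"
  else
    echo "would run: $*"
  fi
}

# Clear the desktop by hiding its icons (Finder restarts to apply).
run defaults write com.apple.finder CreateDesktop -bool false
run killall Finder

# Set a clean wallpaper (path is a stand-in for whatever you use).
run osascript -e 'tell application "System Events" to set picture of every desktop to "/Library/Desktop Pictures/Solid Colors/Black.png"'

# Open the array of programs this mode needs.
run open -a "QuickTime Player"
```

Each mode becomes one of these scripts, which is exactly why the approach breaks down: every time my context shifts, I have to edit the script by hand instead of the OS learning the mode itself.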
Mobile isn’t replacing my laptop any time soon - it’s just expanding it at a different resolution.
It seems that our desktop/laptop computers are staying relatively stagnant. The atmosphere of operating systems has changed in our lives because of mobile, but the desktop isn’t keeping up. My computer should have sense-abilities that are currently reserved for my phone, or at the very least it should be tightly connected with my mobile devices and use my phone’s sensors to inform it of context. My health tracking should be most accessible on my laptop, and should change in resolution as I move from the larger, more flexible laptop to more limited devices.
My laptop should be as smart as my mobile device, or smarter. Until the resolution of interaction on a phone matches or surpasses that of a desktop computer, desktop OS innovation must keep pace.